CES 2020 - CEVA's SenslinQ Platform to streamline development of contextually aware IoT devices

CEVA, a licensor of wireless connectivity and smart sensing technologies, has unveiled SenslinQ, its first integrated hardware IP and software platform that aggregates sensor fusion, sound and connectivity technologies to enable contextually aware IoT devices.

Contextual awareness is becoming a mandatory feature of many devices such as smartphones, laptops, AR/VR headsets, robots, hearables and wearables, driven by OEMs and IT companies looking to add value and enhance the user experience.

The SenslinQ platform streamlines the development of these devices by centralising the workloads that require an intimate understanding of the physical behaviours and anomalies of sensors. It collects data from multiple sensors within a device, including microphones, radars, Inertial Measurement Units (IMUs), environmental sensors and Time of Flight (ToF) sensors, and performs front-end signal processing such as noise suppression and filtering on this data. Applying advanced algorithms, SenslinQ then creates “context enablers” such as activity classification, voice and sound detection, and presence and proximity detection. These context enablers can be fused on-device or sent wirelessly (Bluetooth, Wi-Fi, NB-IoT) to a local edge computer or the cloud to determine the environment in which the device operates and adapt its behaviour accordingly.
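
The sketch below illustrates the general data flow described above: raw sensor samples pass through front-end filtering, feed a simple "context enabler", and are then either fused on-device or handed off to a wireless link. All structures, function names and thresholds here are illustrative assumptions, not CEVA's actual SenslinQ APIs.

```c
/*
 * Hypothetical sketch of a SenslinQ-style pipeline:
 * raw IMU data -> front-end filtering -> context enabler -> on-device fusion
 * or wireless hand-off. Names and thresholds are illustrative only.
 */
#include <stdio.h>
#include <math.h>
#include <stdbool.h>

typedef struct { float ax, ay, az; } imu_sample_t;   /* accelerometer axes, in g */

/* Front-end processing: simple exponential low-pass filter (noise suppression). */
static float lowpass(float prev, float in, float alpha)
{
    return prev + alpha * (in - prev);
}

/* Context enabler: crude activity classification from filtered motion energy. */
static const char *classify_activity(float motion_energy)
{
    if (motion_energy < 0.05f) return "stationary";
    if (motion_energy < 0.50f) return "walking";
    return "running";
}

/* Placeholder for a wireless hand-off (Bluetooth, Wi-Fi or NB-IoT stack call). */
static void send_to_edge(const char *context)
{
    printf("TX to edge/cloud: %s\n", context);
}

int main(void)
{
    /* Pretend these came from an IMU driver; in a real system a HAL would deliver them. */
    imu_sample_t samples[] = {
        { 0.02f, -0.01f, 1.00f },
        { 0.30f,  0.10f, 1.20f },
        { 0.80f, -0.40f, 1.60f },
    };
    float filtered = 0.0f;
    const bool fuse_on_device = true;  /* assumption: policy chosen at integration time */

    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; ++i) {
        /* Motion energy = deviation of acceleration magnitude from 1 g. */
        imu_sample_t s = samples[i];
        float mag = sqrtf(s.ax * s.ax + s.ay * s.ay + s.az * s.az);
        filtered = lowpass(filtered, fabsf(mag - 1.0f), 0.5f);

        const char *context = classify_activity(filtered);
        if (fuse_on_device)
            printf("on-device fusion: activity=%s\n", context);
        else
            send_to_edge(context);
    }
    return 0;
}
```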

The SenslinQ platform incorporates both the hardware IP and software components required to enable contextually aware IoT devices. The customisable hardware reference design is composed of three pillars connected using standard system interfaces:

  • Arm or RISC-V MCU
  • CEVA-BX DSPs
  • Wireless Connectivity Island, such as RivieraWaves Bluetooth, Wi-Fi or Dragonfly NB-IoT platforms, or other connectivity standards provided by the customer or 3rd parties.

The SenslinQ software comprises a large portfolio of ready-to-use software libraries from CEVA and its ecosystem partners, including:

  • Hillcrest Labs MotionEngine software packages for sensor fusion and activity classification in mobile, wearables, hearables, robots and more
  • ClearVox front-end voice processing, WhisPro speech recognition, and comprehensive DSP and AI libraries
  • Extensive 3rd party software components for Active Noise Cancellation (ANC), sound sensing, 3D audio and more

The SenslinQ platform is accompanied by the SenslinQ framework, which provides Linux-based Hardware Abstraction Layer (HAL) reference code and APIs for data and control exchange between the multiple processors and the various sensors.
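
To make the role of such a HAL concrete, the sketch below shows what a uniform sensor interface with separate data and control entry points could look like, so MCU- or DSP-side code can talk to a sensor without knowing the underlying transport. The types, command codes and function names are assumptions for illustration and do not represent the actual SenslinQ framework API.

```c
/*
 * Illustrative HAL-style sensor interface: one handle per sensor with a data
 * path (read) and a control path (ioctl). All names are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { SENSOR_IMU, SENSOR_MIC, SENSOR_TOF } sensor_type_t;

typedef struct sensor_dev {
    sensor_type_t type;
    /* Data path: fill buf with up to len bytes of sensor data, return bytes read. */
    int (*read)(struct sensor_dev *dev, void *buf, uint32_t len);
    /* Control path: e.g. set sampling rate or power mode. */
    int (*ioctl)(struct sensor_dev *dev, uint32_t cmd, void *arg);
} sensor_dev_t;

#define SENSOR_CMD_SET_RATE_HZ 1u   /* hypothetical control command */

/* Stub IMU driver so the example is self-contained. */
static int imu_read(sensor_dev_t *dev, void *buf, uint32_t len)
{
    (void)dev;
    const float fake_sample[3] = { 0.0f, 0.0f, 1.0f };  /* ax, ay, az */
    uint32_t n = len < sizeof fake_sample ? len : sizeof fake_sample;
    memcpy(buf, fake_sample, n);
    return (int)n;
}

static int imu_ioctl(sensor_dev_t *dev, uint32_t cmd, void *arg)
{
    (void)dev;
    if (cmd == SENSOR_CMD_SET_RATE_HZ)
        printf("IMU sampling rate set to %u Hz\n", *(uint32_t *)arg);
    return 0;
}

int main(void)
{
    sensor_dev_t imu = { SENSOR_IMU, imu_read, imu_ioctl };
    uint32_t rate = 100;
    float sample[3];

    imu.ioctl(&imu, SENSOR_CMD_SET_RATE_HZ, &rate);   /* control exchange */
    int n = imu.read(&imu, sample, sizeof sample);    /* data exchange    */
    printf("read %d bytes: ax=%.2f ay=%.2f az=%.2f\n", n, sample[0], sample[1], sample[2]);
    return 0;
}
```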