GreenWaves announces next-generation GAP9 hearables platform


GreenWaves Technologies has unveiled a next-generation hearables platform based on its GAP9 IoT application processor, built on GLOBALFOUNDRIES' (GF) 22FDX solution.

GreenWaves is targeting the fast-growing hearables market, which is forecast to exceed one billion units in 2024. GAP9’s unified, easy-to-program architecture significantly reduces the energy required for routine activities such as music playback, active noise cancellation and voice commands, freeing up power to improve audio quality through neural network-based features such as deep noise reduction and acoustic scene detection.

GAP9 exploits two innovative features of GF’s 22FDX solution: adaptive body bias (ABB) and eMRAM. ABB narrows the sign-off window, which contributes significantly to the chip’s ultra-low power consumption. GAP9 processes neural networks with a power efficiency of 330µW/GOP while also delivering market-leading energy performance for low-power, low-latency digital signal processing (DSP) operations such as audio filtering.
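To put the headline efficiency figure in context, here is a rough back-of-envelope sketch. It assumes the 330µW/GOP figure denotes power per GOP/s of sustained neural-network throughput, and the workload size and frame rate are hypothetical illustration values, not GreenWaves data.

```c
#include <stdio.h>

/* Back-of-envelope sketch: only the 330 uW/GOP headline comes from the
 * announcement; the workload numbers below are hypothetical. Assumes the
 * figure means ~330 uW per GOP/s of sustained NN throughput. */
int main(void)
{
    const double uw_per_gops   = 330.0;  /* headline efficiency (uW per GOP/s) */
    const double gop_per_frame = 0.05;   /* hypothetical NN cost per audio frame */
    const double frames_per_s  = 100.0;  /* hypothetical 10 ms frame rate */

    double gops = gop_per_frame * frames_per_s;   /* sustained GOP/s */
    double mw   = gops * uw_per_gops / 1000.0;    /* estimated NN power in mW */

    printf("Workload: %.1f GOP/s -> ~%.2f mW for the NN portion\n", gops, mw);
    return 0;
}
```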

“What is really exciting our GAP9 partners is the unprecedented room for innovation that we are offering them,” said Loic Lietar, CEO of GreenWaves. “Processing a music stream with active noise cancellation consumes less than 10 percent of GAP9 resources, leaving plenty of headroom for dramatically improving the audio comfort or other new cutting-edge features, at the same time simplifying their development. GF’s 22FDX platform, with its incredible performance, ultra-low power capability, low-leakage, and flexibility, was instrumental for enabling us to hit our performance targets.”

In addition to ABB, GAP9 takes advantage of 2MB of GF’s advanced embedded non-volatile memory (eMRAM), another feature of the 22FDX solution, which has allowed GreenWaves to cut the energy used for neural network parameter transfers by a factor of 3.5.

GAP9’s architecture combines an autonomous, intelligent peripheral controller and streamer capable of processing data on the fly with a trans-precision compute cluster offering energy-scaled performance from a few MOPs up to 150 GOPs. This level of performance enables real-time, ultra-low-power processing of state-of-the-art recurrent and convolutional neural networks (NN). With GAP9, hearable and wearable products can provide voice pickup features that were previously feasible only in cloud-assisted applications.
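The point of an autonomous streamer is that I/O and compute can overlap. The sketch below illustrates the general ping-pong (double-buffer) pattern this enables; the function names are illustrative stand-ins, not the GAP SDK API, and the sketch runs the two stages back-to-back where real hardware would overlap them.

```c
#include <stdint.h>
#include <stdio.h>

#define FRAME 64

/* Conceptual sketch of double buffering: a peripheral/streamer fills one
 * buffer while the compute cluster works on the other. On GAP9-style
 * hardware the two would run concurrently; this sequential code only
 * shows the buffer handling. Function names are stand-ins. */

static void fake_capture(int16_t *buf, int n)   /* stand-in for a streamer fill */
{
    for (int i = 0; i < n; i++) buf[i] = (int16_t)(i & 0xff);
}

static int32_t cluster_process(const int16_t *buf, int n)  /* stand-in for an NN/DSP kernel */
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++) acc += buf[i];
    return acc;
}

int main(void)
{
    int16_t ping[FRAME], pong[FRAME];
    int16_t *fill = ping, *work = pong;

    fake_capture(work, FRAME);                     /* prime the first frame */
    for (int frame = 0; frame < 4; frame++) {
        fake_capture(fill, FRAME);                 /* next frame streams in...          */
        int32_t r = cluster_process(work, FRAME);  /* ...while this frame is processed  */
        printf("frame %d -> %d\n", frame, r);
        int16_t *t = fill; fill = work; work = t;  /* swap ping/pong buffers */
    }
    return 0;
}
```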

The intelligent peripheral controller in GAP9 includes a new building block for digital filtering: the smart filter unit (SFU). The SFU is an application-specific stream processor designed to blend ultra-low-latency, ultra-low-power filtering with the flexibility of traditional cores. The SFU’s data is streams, its instructions are filters and its program is a graph of filters. The SFU transforms audio filter development for active noise cancellation, spatial sound and other music and voice stream filtering, combining microsecond latency with AI-based filter adaptation at reduced energy levels.
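As an illustration of the "program is a graph of filters" idea, the sketch below pushes a sample stream through a small chain of biquad sections. It models the programming abstraction only; the coefficients are placeholders and the code is not the SFU’s actual instruction set or toolchain.

```c
#include <stdio.h>

/* Conceptual model of a filter-graph program: each node is a biquad
 * section and the stream flows through the graph sample by sample.
 * Coefficients are placeholders for illustration. */

typedef struct {            /* one biquad section (direct form I) */
    float b0, b1, b2, a1, a2;
    float x1, x2, y1, y2;   /* delay-line state */
} Biquad;

static float biquad_step(Biquad *f, float x)
{
    float y = f->b0*x + f->b1*f->x1 + f->b2*f->x2 - f->a1*f->y1 - f->a2*f->y2;
    f->x2 = f->x1; f->x1 = x;
    f->y2 = f->y1; f->y1 = y;
    return y;
}

int main(void)
{
    /* A two-node "filter graph": a low-pass section followed by a mild
     * shaping section (placeholder coefficients). */
    Biquad graph[2] = {
        { 0.2929f, 0.5858f, 0.2929f,  0.0f, 0.1716f, 0, 0, 0, 0 },
        { 1.0f,   -0.5f,    0.0f,    -0.3f, 0.0f,    0, 0, 0, 0 },
    };

    for (int n = 0; n < 8; n++) {           /* toy input stream: an impulse */
        float s = (n == 0) ? 1.0f : 0.0f;
        for (int k = 0; k < 2; k++)         /* push the sample through the graph */
            s = biquad_step(&graph[k], s);
        printf("y[%d] = %f\n", n, s);
    }
    return 0;
}
```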

The combination of the intelligent peripheral controller and streamer with the compute cluster delivers a single, unified architecture for control applications, NN, DSP and filtering.

GAP9 is intended to simplify hearables development through a homogeneous design that provides DSP, neural network acceleration and ultra-low-latency audio stream processing on a RISC-V-based core architecture, combined with a complete development flow for integrated neural network and audio filter design.