CEVA unveils enhanced NeuPro-M NPU IP family

CEVA has enhanced its NeuPro-M NPU family, in a move designed to address the processing needs of the next era of Generative AI.

The company’s NeuPro-M NPU architecture and tools have been extensively redesigned to support transformer networks in addition to CNNs and other neural networks, and to provide support for future machine learning inferencing models.

According to CEVA, this will enable highly optimised applications leveraging the capabilities of Generative and classic AI to be seamlessly developed and run on the NeuPro-M NPU inside communication gateways, optically connected networks, cars, notebooks and tablets, AR/VR headsets, smartphones, and any other cloud or edge use case.

According to Ran Snir, Vice President and General Manager of the Vision Business Unit at CEVA, “Transformer-based networks that drive Generative AI require a massive increase in compute and memory resources, which calls for new approaches and optimised processing architectures to meet this compute and memory demand boost. Our NeuPro-M NPU IP is designed specifically to handle both classic AI and Generative AI workloads efficiently and cost-effectively. It is scalable to address use cases from the edge to the cloud and is future proof to support new inferencing models.”

ABI Research forecasts that Edge AI shipments will grow from 2.4 billion units in 2023 to 6.5 billion units in 2028, a compound annual growth rate (CAGR) of 22.4%.
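As a quick, illustrative sanity check of that forecast, the growth rate implied by the rounded unit counts quoted above can be computed directly; it lands near 22%, consistent with ABI's 22.4% figure once rounding of the shipment numbers is accounted for:

    # Sanity check of the quoted CAGR (illustrative; uses the rounded unit counts above).
    start_units = 2.4e9          # forecast Edge AI shipments in 2023
    end_units = 6.5e9            # forecast Edge AI shipments in 2028
    years = 2028 - 2023
    cagr = (end_units / start_units) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # ~22.1%, in line with the quoted 22.4%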

Generative AI is set to play a vital role in underpinning this growth, and increasingly sophisticated and intelligent edge applications are driving the need for more powerful and efficient AI inferencing techniques. In particular, the Large Language Models (LLMs) and vision and audio transformers used in Generative AI can transform products and industries, but they introduce significant challenges in performance, power, cost, latency and memory when running on edge devices.

Reece Hayden, Senior Analyst, ABI Research, said, “The hardware market for Generative AI today is heavily concentrated with dominance by a few vendors. To deliver on the promise of this technology, there needs to be a clear path to lower power, lower cost inference processing, both in the cloud and at the edge. This will be achieved with smaller model sizes and more efficient hardware to run it.”

By evolving inferencing and modelling techniques, new capabilities for leveraging smaller, domain-specific LLMs, vision transformers and other Generative AI models at the device level are set to transform applications. Crucially, the enhanced NeuPro-M architecture is highly versatile and future-proof thanks to an integrated VPU (Vector Processing Unit), which supports any future network layer.

Additionally, the architecture supports any activation and any data flow, with true sparsity for data and weights enabling up to 4X acceleration in performance, allowing customers to address multiple applications and markets with a single NPU family. To deliver the greater scalability required by diverse AI markets, NeuPro-M adds new NPM12 and NPM14 NPU cores, with two and four NeuPro-M engines respectively, for easy migration to higher-performance AI workloads. The enhanced NeuPro-M family now comprises four NPUs: the NPM11, NPM12, NPM14, and NPM18.
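CEVA does not publicly detail how its sparsity engine is implemented, but a minimal sketch (plain Python with made-up tensor shapes and sparsity levels, not CEVA's mechanism) shows why skipping zero weights and activations bounds the achievable speedup, and how roughly 50% sparsity on each operand yields an upper bound of about 4X:

    import numpy as np

    # Illustrative only: real NPUs exploit sparsity with dedicated hardware and
    # structured formats; this toy model just shows where the speedup bound comes from.
    def mac_counts(weights: np.ndarray, activations: np.ndarray) -> tuple[int, int]:
        """Return (dense MAC count, MACs remaining after zero-skipping)."""
        dense = weights.size
        # A multiply-accumulate can be skipped whenever either operand is zero.
        remaining = int(np.count_nonzero(weights * activations))
        return dense, remaining

    rng = np.random.default_rng(0)
    shape = (256, 256)                                   # made-up tensor shape
    w = rng.random(shape) * (rng.random(shape) > 0.5)    # ~50% zero weights
    a = rng.random(shape) * (rng.random(shape) > 0.5)    # ~50% zero activations
    dense, remaining = mac_counts(w, a)
    print(f"Speedup bound: {dense / remaining:.1f}x")    # ~4.0x at ~50% sparsity each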

The NeuPro-M offers peak performance of 350 TOPS/Watt at a 3nm process node and is capable of processing more than 1.5 million tokens per second per watt for transformer-based LLM inferencing.

Accompanying the enhanced NeuPro-M architecture is a revamped, comprehensive development toolchain based on CEVA's network AI compiler, CDNN, which is architecture-aware for full utilisation of the NeuPro-M parallel processing engines and for maximising customers' AI application performance.

The CDNN software includes a memory manager for memory bandwidth reduction and optimal load balancing algorithms, and is compatible with common open-source frameworks, including TVM and ONNX.
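Since CDNN consumes models from open-source frameworks such as ONNX, the handoff from a training framework might look like the sketch below. The model choice, input shape, and file name are illustrative placeholders, and the CDNN-specific compilation step itself is not shown here:

    import torch
    import torchvision

    # Export a standard model to ONNX, one of the interchange formats CDNN accepts.
    # Model, input shape, and file name are placeholders for illustration.
    model = torchvision.models.resnet18(weights=None).eval()
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)
    # model.onnx would then be handed to CEVA's CDNN toolchain for
    # NeuPro-M-aware optimisation and deployment.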

The NPM11 NPU IP is available for customer deployment today, with the NPM12, NPM14, and NPM18 available for lead customers.