Neurxcore introduces NPU product line for AI inference applications


Neurxcore, a provider of Artificial Intelligence (AI) solutions, has announced the launch of a new product line of Neural Processing Units (NPUs) for AI inference applications.

These NPUs have been built using an enhanced and extended version of NVIDIA's open-source Deep Learning Accelerator (Open NVDLA) technology, combined with patented in-house architectures.

The SNVDLA IP series from Neurxcore delivers improved energy efficiency, performance, and capability, with a primary focus on image processing, including classification and object detection. SNVDLA also offers versatility for generative AI applications; it has already been silicon-proven on a 22nm TSMC process and showcased on a demonstration board running a variety of applications.

The IP package also includes the Heracium SDK (Software Development Kit), built by Neurxcore on the open-source Apache TVM (Tensor Virtual Machine) framework, to configure, optimise and compile neural network applications on SNVDLA products.
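Heracium's own API is not described in the article, but because it is built on Apache TVM, the standard TVM flow gives a flavour of what configuring and compiling a network for such a target involves. Below is a minimal sketch using TVM's public Python API, assuming a hypothetical ONNX model file and substituting TVM's generic "llvm" CPU target where a vendor SDK would plug in its accelerator backend:

```python
import onnx
import tvm
from tvm import relay

# Load a trained network in ONNX format (file name is hypothetical)
onnx_model = onnx.load("mobilenet_v2.onnx")

# Import the graph into TVM's Relay intermediate representation,
# declaring the input tensor's name and shape
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Optimise and compile; "llvm" targets a generic CPU here, whereas a
# vendor SDK such as Heracium would supply its own accelerator target
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Export the compiled artefact for deployment
lib.export_library("compiled_model.so")
```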

Neurxcore's product line caters to a wide range of industries and applications, spanning from ultra-low power to high-performance scenarios, including sensors and IoT, wearables, smartphones, smart homes, surveillance, Set-Top Box and Digital TV (STB/DTV), smart TV, robotics, edge computing, AR/VR, ADAS, and servers.

In addition to the NPUs, Neurxcore also offers a package for developing customised NPU solutions, including new operators, AI-enabled optimised subsystem design, and optimised model development covering training and quantisation.
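The article does not detail Neurxcore's quantisation flow, but as a generic illustration of what quantisation means in model development, the sketch below applies symmetric post-training int8 quantisation to a weight tensor; all values and the helper name are illustrative:

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Symmetric post-training quantisation of a float tensor to int8."""
    scale = float(np.abs(weights).max()) / 127.0  # map max magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantise_int8(w)
dequantised = q.astype(np.float32) * scale  # dequantise to check error
print("max abs error:", np.abs(w - dequantised).max())
```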

Virgile Javerliac, founder and CEO of Neurxcore, commented, "80% of AI computational tasks involve inference. Achieving energy and cost reduction while maintaining performance is crucial."

The SNVDLA product line exhibits substantial improvements in energy efficiency, performance, and feature set compared to the original NVIDIA version, while also benefiting from NVIDIA's industrial-grade development.

The product line's fine-grained tunability, covering parameters such as the number of cores and multiply-accumulate (MAC) operations per core, allows it to serve applications across a range of markets.
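To see why core count and MACs per core are the levers that matter, the back-of-envelope peak-throughput arithmetic is simple: each MAC counts as two operations (a multiply and an add) per cycle. The configurations below are entirely hypothetical, since the article publishes no SNVDLA figures:

```python
def peak_tops(cores: int, macs_per_core: int, freq_ghz: float) -> float:
    """Peak throughput in TOPS, counting each MAC as two ops (multiply + add)."""
    return cores * macs_per_core * 2 * freq_ghz * 1e9 / 1e12

# Hypothetical configurations spanning the low-power-to-high-performance range
print(peak_tops(1, 256, 0.5))   # small IoT-class config: 0.256 TOPS peak
print(peak_tops(4, 2048, 1.0))  # larger edge config: ~16.4 TOPS peak
```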

According to Gartner's 2023 AI Semiconductors report, titled Forecast: AI Semiconductors, Worldwide, 2021-2027, the use of artificial intelligence techniques in data centres, edge computing and endpoint devices requires the deployment of optimised semiconductor devices. Revenue from these AI semiconductors is forecast to reach $111.6 billion by 2027, growing at a five-year CAGR of 20%.