Processors for AI applications

A family of artificial intelligence (AI) processors launched by CEVA has been designed for deep learning inference at the edge.

The NeuPro collection ranges from 2 TOPS for the entry-level processor to 12.5 TOPS for the most advanced configuration.

Ilan Yona, general manager of the vision business unit at CEVA, said: “It’s abundantly clear that AI applications are trending toward processing at the edge, rather than relying on services from the cloud. The computational power required, along with the low power constraints for edge processing, calls for specialised processors rather than using CPUs, GPUs or DSPs.”

The NeuPro family includes four AI processors, each comprising a NeuPro engine and a NeuPro VPU (vector processing unit).

The engine includes hardwired implementations of neural network layers, among them convolutional, fully connected, pooling and activation layers.
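As a rough illustration of what those layer types compute (not of the NeuPro hardware itself), the following Python sketch chains minimal NumPy versions of a convolution, pooling, activation and fully connected layer; all shapes, values and helper names here are illustrative assumptions.

```python
# Minimal NumPy sketch of the four layer types named above.
# Shapes and sizes are illustrative only, not tied to NeuPro.
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation) of a single channel."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    """Activation layer."""
    return np.maximum(x, 0.0)

def fully_connected(x, weights, bias):
    """Dense layer: flatten the input and apply an affine transform."""
    return weights @ x.ravel() + bias

# Chain the layers on a random input, as a fixed-function engine might.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))
feat = relu(max_pool(conv2d(image, kernel)))
w = rng.standard_normal((4, feat.size))
b = rng.standard_normal(4)
print(fully_connected(feat, w, b))
```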

The NeuPro VPU is a programmable vector DSP that handles CEVA’s CDNN software and provides software-based support for new advances in AI workloads.

NeuPro supports both 8-bit and 16-bit neural networks, with the optimal precision determined in real time. It will be available for licensing to select customers in the second quarter of 2018 and for general licensing in the third quarter of 2018.
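To illustrate the kind of trade-off behind an 8-bit versus 16-bit choice, here is a hedged Python sketch that quantises a tensor at both widths and keeps the narrower one when the error stays below a tolerance; the error metric, threshold and function names are assumptions made for this example, not CEVA's actual selection logic.

```python
# Rough illustration of choosing between 8-bit and 16-bit representations
# for a tensor of weights or activations. Threshold and metric are arbitrary.
import numpy as np

def quantize(x, bits):
    """Symmetric linear quantisation to a signed integer grid of width `bits`,
    returned as dequantised values so the error can be measured."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

def choose_bit_width(x, tolerance=1e-3):
    """Pick 8-bit when the mean-squared quantisation error is acceptable,
    otherwise fall back to 16-bit."""
    err8 = np.mean((x - quantize(x, 8)) ** 2)
    return 8 if err8 < tolerance else 16

rng = np.random.default_rng(1)
weights = rng.standard_normal(1024) * 0.1
print(choose_bit_width(weights))  # prints 8 or 16, depending on the error
```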

CEVA will also offer the NeuPro hardware engine as a convolutional neural network accelerator.

Author
Bethan Grylls
