Kinara and NXP to provide scalable AI solutions for deep learning at the edge


Kinara, a developer of AI processors for edge computing applications, has announced that it is collaborating with NXP Semiconductors on developing AI solutions for deep learning at the edge.

Customers of NXP’s AI-enabled product portfolio will now have the option to further scale their AI acceleration by using the Kinara Ara-1 Edge AI processor for high-performance inferencing with deep learning models.

By working together, the two companies have integrated the computer vision capabilities of the NXP i.MX applications processors with the performance- and power-optimised inferencing of the Kinara Ara-1 AI processor, providing computer vision analytics for a range of applications spanning smart retail, smart city, and industrial markets.

Kinara’s Edge AI processor, the Ara-1, delivers improved performance and power efficiency for integrated cameras and edge servers. Kinara complements the processor with a set of development tools that let customers convert their neural network models into optimised computation flows ready for deployment on the Ara-1 chip.
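To make that workflow concrete, here is a minimal, hypothetical sketch of the kind of "export, then compile" flow such tools typically wrap. Only the ONNX export uses real, public APIs (PyTorch and torchvision); the compile-for-accelerator command is a made-up placeholder, since the article does not describe Kinara's actual SDK or tool names.

```python
# Hedged sketch of a typical "train -> export -> compile" edge-AI flow.
# Only the ONNX export uses real, public APIs (PyTorch/torchvision);
# the compile step is a hypothetical placeholder, not Kinara's tooling.
import torch
import torchvision

# 1. Start from a trained vision model (MobileNetV2 as an example).
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()

# 2. Export to ONNX, a common interchange format accepted by edge toolchains.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx", opset_version=13)

# 3. A vendor toolchain would then quantise and compile the ONNX graph into
#    an artifact deployable on the accelerator. The command below is
#    illustrative only -- "vendor-compiler" is not a real tool name.
compile_cmd = ["vendor-compiler", "--input", "mobilenet_v2.onnx",
               "--target", "ara-1", "--output", "mobilenet_v2.bin"]
print("Would run:", " ".join(compile_cmd))
```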

"Intelligent vision processing is an exploding market that is a natural fit for machine learning. But vision systems are getting increasingly complex, with more and larger sensors, and model sizes are growing. To keep pace with these trends requires dedicated AI accelerators that can handle the processing load efficiently, both in power and silicon area,” said Kevin Krewell, principal analyst at TIRIAS Research. “The best modular approach to vision systems is a combination of an established embedded processor and a power-efficient AI accelerator, such as the combination of NXP’s i.MX family of embedded applications processors and the Kinara AI accelerator."

NXP’s AI processing solutions encompass its microcontrollers (MCUs), i.MX RT series of crossover MCUs, and i.MX applications processor families, which represent a variety of multicore solutions for multimedia and display applications.

The company’s portfolio natively covers a very large portion of AI processing needs. For use cases that require even higher-performance AI, driven by increases in frame rates, image resolution, and the number of sensors, NXP processors can be integrated with Kinara’s Ara-1 to deliver a scalable, system-level solution: customers can scale up and partition the AI workload between the NXP device and the Ara-1 while maintaining common application software running on the NXP processors.
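As an illustration of that partitioning model, the sketch below keeps pre- and post-processing in common host-side application code and offloads only the compute-heavy inference step to an attached accelerator. The `Accelerator` class is a hypothetical stand-in; the article does not describe the real Kinara or NXP runtime APIs.

```python
# Hedged sketch of host/accelerator workload partitioning. The host-side
# application code (capture, preprocess, postprocess) stays unchanged,
# while inference is offloaded. "Accelerator" is a hypothetical stand-in,
# not a real Kinara or NXP API.
from dataclasses import dataclass

import numpy as np


@dataclass
class Accelerator:
    """Hypothetical handle to an attached AI accelerator (e.g. over PCIe or USB)."""
    model_path: str

    def infer(self, batch: np.ndarray) -> np.ndarray:
        # Placeholder: a real runtime would transfer the batch to the device,
        # run the compiled model, and return the output tensor.
        return np.zeros((batch.shape[0], 1000), dtype=np.float32)


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Host-side: normalise a camera frame and add a batch dimension."""
    return (frame.astype(np.float32) / 255.0)[np.newaxis, ...]


def postprocess(logits: np.ndarray) -> int:
    """Host-side: pick the top-scoring class from the accelerator output."""
    return int(np.argmax(logits))


# Common application software keeps running on the host processor; only the
# compute-heavy inference call is routed to the accelerator.
accel = Accelerator(model_path="mobilenet_v2.bin")
frame = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
label = postprocess(accel.infer(preprocess(frame)))
print("Predicted class index:", label)
```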

“Our processing solutions and AI software stacks enable a very wide range of AI performance requirements; this is a necessity given our extremely broad customer base,” said Joe Yu, Vice President and General Manager, IoT Edge Processing, NXP Semiconductors. “By working with Kinara to help satisfy our customers’ requirements at the highest end of edge AI processing, we will bring high-performance AI to smart retail, smart city, and industrial markets.”

“We see two general trends with our Edge AI customers. One trend is a shift towards a Kinara solution that significantly reduces the cost and energy of their current platforms that use a traditional GPU for AI acceleration. The other trend calls for replacing Edge AI accelerators from well-known brands with Kinara’s Ara-1, allowing the customer to achieve at least a 4x performance improvement at the same or better price,” said Ravi Annavajjhala, CEO, Kinara. “Our collaboration with NXP will allow us to offer very compelling system-level solutions that include commercial-grade Linux and driver support that complements the end-to-end inference pipeline.”