Renesas and StradVision to develop vision technology for autonomous vehicles


Renesas and StradVision, a provider of vision processing technology for autonomous vehicles, are to develop a deep learning-based object recognition solution for smart cameras used in next-generation advanced driver-assistance system (ADAS) applications, as well as for cameras in ADAS Level 2 and above.

Next-generation ADAS implementations will require high-precision object recognition to detect so-called vulnerable road users (VRUs) such as pedestrians and cyclists, while consuming very little power. The solution from Renesas and StradVision achieves both and is designed to accelerate the widespread adoption of ADAS.

“A leader in vision processing technology, StradVision has abundant experience developing ADAS implementations using Renesas’ R-Car SoCs, and with this collaboration, we are enabling production-ready solutions that enable safe and accurate mobility in the future,” said Naoki Yoshida, Vice President of Renesas’ Automotive Technical Customer Engagement Business Division. “This joint deep learning-based solution optimised for R-Car SoCs will contribute to the widespread adoption of next-generation ADAS implementations and support the escalating vision sensor requirements expected to arrive in the next few years.”

According to Junhwan Kim, CEO of StradVision: “This joint effort will not only translate into quick and effective evaluations, but also deliver greatly improved ADAS performance. With the massive growth expected in the front camera market in the coming years, this collaboration puts both StradVision and Renesas in an excellent position to provide the best possible technology.”

StradVision’s deep learning-based object recognition software delivers high performance in recognising vehicles, pedestrians and lane markings. This high-precision recognition software has been optimised for Renesas’ R-Car V3H and R-Car V3M automotive system-on-chip (SoC) products, which incorporate CNN-IP (Convolution Neural Network Intellectual Property), a dedicated engine for deep learning processing. This allows them to run StradVision’s SVNet automotive deep learning network at high speed with minimal power consumption.

The object recognition solution resulting from this collaboration delivers deep learning-based object recognition while maintaining low power consumption, making it suitable for mass-produced vehicles and encouraging wider ADAS adoption.

Key features of the deep learning-based object recognition solution include:

  • StradVision’s SVNet deep learning software is a powerful AI perception solution for mass-production ADAS systems. It is highly regarded for its recognition precision in low-light environments and its ability to deal with occlusion, where objects are partially hidden by other objects. The basic software package for the R-Car V3H performs simultaneous vehicle, person and lane recognition, processing image data at a rate of 25 frames per second and enabling swift evaluation and proof-of-concept (POC) development. StradVision supports deep learning-based object recognition across every step, from network training through to embedding the software in mass-produced vehicles.
  • In addition to the CNN-IP dedicated deep learning module, the Renesas R-Car V3H and R-Car V3M feature the IMP-X5 image recognition engine. Combining complex, deep learning-based object recognition with highly verifiable, rule-based image recognition processing allows designers to build a powerful system (a minimal illustrative sketch follows this list). The on-chip image signal processor (ISP) converts sensor signals for image rendering and recognition processing, which makes it possible to configure a system using inexpensive cameras without built-in ISPs, reducing the overall bill-of-materials (BOM) cost.
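
To make that combination concrete, the sketch below shows, in general terms, the pattern described in the second bullet: per-frame deep learning detections cross-checked against simple hand-crafted rules before they are passed downstream. It is a hypothetical Python illustration only; the class and function names, thresholds and canned detections are placeholders and are not taken from StradVision’s SVNet or Renesas’ R-Car software.

# Hypothetical sketch: per-frame CNN detections filtered by hand-crafted rules.
# All names, thresholds and canned detections are placeholders for illustration;
# they do not come from StradVision's SVNet or Renesas' R-Car software.
from dataclasses import dataclass
from typing import List, Tuple

FRAME_RATE_HZ = 25  # the article cites 25 frames per second for the R-Car V3H package


@dataclass
class Detection:
    label: str                      # e.g. "vehicle", "person", "lane"
    confidence: float               # 0.0 .. 1.0
    box: Tuple[int, int, int, int]  # (x, y, width, height) in pixels


def run_cnn_inference(frame) -> List[Detection]:
    """Stand-in for the deep learning pass that would run on a CNN accelerator.
    Returns canned detections so the sketch is runnable without hardware."""
    return [
        Detection("vehicle", 0.92, (410, 220, 160, 120)),
        Detection("person", 0.48, (700, 300, 18, 40)),    # below the confidence threshold
        Detection("lane", 0.88, (0, 480, 1280, 240)),
    ]


def passes_rule_checks(det: Detection, frame_height: int) -> bool:
    """Hand-crafted sanity rules, i.e. the 'highly verifiable' rule-based step.
    The thresholds are illustrative, not taken from any vendor specification."""
    _, _, w, h = det.box
    if det.confidence < 0.5:
        return False                                      # drop low-confidence detections
    if w <= 0 or h <= 0:
        return False                                      # reject degenerate boxes
    if det.label == "person" and h < 0.02 * frame_height:
        return False                                      # implausibly small pedestrian
    return True


def process_frame(frame, frame_height: int) -> List[Detection]:
    """One iteration of the perception loop: CNN detections filtered by rules."""
    return [d for d in run_cnn_inference(frame) if passes_rule_checks(d, frame_height)]


if __name__ == "__main__":
    print(f"target frame rate: {FRAME_RATE_HZ} fps")
    kept = process_frame(frame=None, frame_height=720)    # the stub ignores the frame
    print([d.label for d in kept])                        # -> ['vehicle', 'lane']

In a production system, the stubbed inference call would be replaced by the vendor’s own detection output running on the CNN accelerator, and the rule set would be derived from the application’s safety requirements rather than the illustrative thresholds used here.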