Prophesee working with Qualcomm on neuromorphic vision technologies


Prophesee is collaborating with Qualcomm Technologies to optimise its event-based Metavision sensors for use with premium Snapdragon mobile platforms.

The companies said they want to bring the speed, efficiency, and quality of neuromorphic-enabled vision to mobile devices, and to give mobile developers a more efficient way to leverage the Prophesee sensor's ability to improve camera performance, particularly in fast-moving dynamic scenes and in low light. The sensor achieves this through its event-based, continuous, and asynchronous pixel sensing approach.

Prophesee is also working on a development kit to support the integration of the Metavision sensor technology for use with devices that contain next-generation Snapdragon platforms.

“We believe this is game-changing technology for taking mobile photography to the next level and our collaboration on both the technical and business levels will help drive adoption by leading OEMs,” said Judd Heape, VP, Product Management at Qualcomm Technologies. “Prophesee’s pioneering achievements with event cameras’ shutter-free capability offer a significant enhancement to the quality of photography available in the next generation of mobile devices powered by Snapdragon, even in the most demanding environments, unlocking a range of new possibilities for Snapdragon customers.”

“Through this collaboration, product developers will be able to dramatically enhance the user experience with cameras that deliver image quality and operational excellence not available using just traditional frame-based methods,” said Luca Verre, CEO and co-founder of Prophesee.

Prophesee’s sensors focus only on changes in a scene, pixel by pixel, continuously, and at extreme speeds. Each pixel in the Metavision sensor embeds a logic core, enabling it to act like a neuron: each pixel activates independently and asynchronously depending on the number of photons it senses. A pixel activating itself is called an event. In essence, events are driven by the scene’s dynamics rather than by an arbitrary clock, so the acquisition speed always matches the actual scene dynamics.
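To make the per-pixel triggering concrete, here is a minimal sketch of how an event camera's behaviour is commonly modelled: each pixel stores a reference log-intensity and fires an event (with a polarity) only when the scene's log-intensity at that pixel changes by more than a contrast threshold. The function name, the threshold value, and the frame-to-frame simulation are illustrative assumptions, not Prophesee's actual sensor pipeline, which operates asynchronously in hardware.

```python
import numpy as np

def generate_events(ref_log_i, curr_log_i, threshold=0.2):
    """Emit an event wherever a pixel's log-intensity change exceeds the
    contrast threshold; polarity is +1 for brighter, -1 for darker.
    Returns event coordinates, polarities, and the updated reference frame."""
    diff = curr_log_i - ref_log_i
    fired = np.abs(diff) >= threshold
    ys, xs = np.nonzero(fired)
    polarities = np.sign(diff[fired]).astype(int)
    # Only pixels that fired update their stored reference level,
    # mirroring the sensor's per-pixel, independent behaviour.
    new_ref = ref_log_i.copy()
    new_ref[fired] = curr_log_i[fired]
    return ys, xs, polarities, new_ref

# A static scene produces no events; a pixel that doubles in brightness
# fires a single positive event.
ref = np.log(np.full((4, 4), 100.0))
scene = ref.copy()
scene[1, 2] = np.log(200.0)
ys, xs, pol, ref = generate_events(ref, scene)
```

Because output is driven purely by change, a static scene generates no data at all, which is where the efficiency claim in the article comes from.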

High-performance event-based deblurring is achieved by synchronising a frame-based sensor with Prophesee’s event-based sensor. The system then fills the gaps between and inside the frames with microsecond-resolution events to algorithmically extract pure motion information and repair motion blur.
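One published way to formalise this fusion is the event-based double-integral model: a blurred frame is the time average of the latent intensity over the exposure, and the events describe how that intensity evolved, so the sharp value at a reference time can be recovered by dividing out the event-derived factor. The sketch below applies that idea to a single pixel; the function, the contrast constant `c`, and the sampling scheme are illustrative assumptions, not the actual algorithm Prophesee and Qualcomm are deploying.

```python
import numpy as np

def deblur_pixel(blurred, event_times, polarities, exposure, c=0.2, n_samples=100):
    """Recover the latent intensity at the start of the exposure for one
    pixel, given its blurred value and its events, under the model
        blurred = L0 * mean_t exp(c * e(t)),
    where e(t) is the signed cumulative event count up to time t."""
    ts = np.linspace(0.0, exposure, n_samples)
    # Cumulative signed event count at each sample time within the exposure.
    e = np.array([np.sum(polarities[event_times <= t]) for t in ts])
    return blurred / np.mean(np.exp(c * e))
```

In words: the events tell the system exactly how the pixel brightened or darkened during the exposure, so the smear in the frame can be attributed to motion and inverted, rather than guessed at from the blurred frame alone.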

A development kit featuring compatibility with Prophesee sensor technologies is expected to be available this year.