Tiny vision processing chip uses 20 times less power

A microchip that captures visual details from video frames while using 20 times less power than existing best-in-class chips has been developed by the National University of Singapore (NUS).

NUS says the low power consumption allows the battery to be 20 times smaller, with the potential to shrink smart vision systems down to the millimetre range. The aim is to develop millimetre-sized smart cameras with a near-perpetual lifespan.

The researchers also believe this chip may be a cost-effective solution for IoT applications such as ubiquitous safety surveillance in airports and key infrastructure.

Associate Professor Massimo Alioto of NUS said: “IoT is a fast-growing technology wave that uses massively distributed sensors to make our environment smarter and human-centric. Vision electronic systems with long lifetime are currently not feasible for IoT applications due to their high power consumption and large size. Our team has addressed these challenges through our tiny EQSCALE chip and we have shown that ubiquitous and always-on smart cameras are viable. We hope that this new capability will accelerate the ambitious endeavour of embedding the sense of sight in the IoT.”

To achieve longer-lasting operation, devices would need to be powered by solar cells that ‘harvest’ energy from the natural lighting of living spaces. At current power levels, however, the solar cells required would be centimetre-scale or larger, limiting how far vision systems can be miniaturised.

Feature extraction in existing systems consumes anywhere from a few milliwatts to hundreds of milliwatts. Shrinking such systems to the millimetre scale would require cutting power consumption to below 1mW.

According to NUS, the EQSCALE chip can perform continuous feature extraction at just 0.2mW. The extractor is reportedly less than a millimetre on each side and can be powered by a solar cell only a few millimetres across.

Alioto explained: “This technological breakthrough is achieved through the concept of energy-quality scaling, where the trade-off between energy consumption and quality in the extraction of features is adjusted. This mimics the dynamic change in the level of attention with which humans observe the visual scene, processing it with different levels of detail and quality depending on the task at hand. Energy-quality scaling allows correct object recognition even when a substantial number of points of interest are missed due to the degraded quality of the target.”
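EQSCALE’s internals have not been published, but the energy-quality trade-off Alioto describes can be sketched in a few lines. In the hypothetical extract_features function below, a quality knob controls how coarsely each frame is subsampled before a simple gradient-based interest score is computed; the number of pixels touched, a rough proxy for energy, falls with the square of the quality setting. The function name, the gradient score, and the threshold are all illustrative assumptions, not the chip’s actual design.

```python
# A minimal sketch of energy-quality scaling in feature extraction.
# Everything here is an illustrative assumption, not the EQSCALE algorithm.
import numpy as np

def extract_features(frame: np.ndarray, quality: float) -> np.ndarray:
    """Return (row, col) keypoints; quality in (0, 1] trades detail for work.

    Lower quality -> coarser subsampling -> fewer pixels processed, so the
    per-frame workload (a rough proxy for energy) shrinks ~quality**2.
    """
    stride = max(1, int(round(1.0 / quality)))       # subsample the frame
    sub = frame[::stride, ::stride].astype(np.float32)

    # Simple gradient-magnitude "interest" score, standing in for a real
    # corner detector.
    gy, gx = np.gradient(sub)
    score = gx * gx + gy * gy

    threshold = score.mean() + 2.0 * score.std()     # keep strong responses
    rows, cols = np.nonzero(score > threshold)
    # Map subsampled coordinates back to the full-resolution frame.
    return np.stack([rows * stride, cols * stride], axis=1)

rng = np.random.default_rng(0)
frame = rng.random((240, 320))                       # stand-in video frame
for q in (1.0, 0.5, 0.25):                           # dial the quality down
    stride = max(1, int(round(1.0 / q)))
    keypoints = extract_features(frame, q)
    print(f"quality={q:.2f}: {len(keypoints)} keypoints, "
          f"1/{stride**2} of pixels processed")
```

Dropping the quality setting from 1.0 to 0.25 touches 16 times fewer pixels yet still reports the strongest responses, which is the intuition behind recognition surviving a degraded set of interest points.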