LeapMind unveils ultra low-power AI inference accelerator IP


LeapMind, a Japanese company specialising in deep learning technology, has announced the development of 'Efficiera', an ultra-low power AI inference accelerator IP for ASIC and FPGA circuits, along with several related products.

Efficiera is an AI inference accelerator IP designed for Convolutional Neural Network (CNN) inference processing; it functions as a circuit in an FPGA or ASIC device.

Its extreme low-bit quantization technology minimises the number of quantized bits to just 1–2 bits. As a result, Efficiera does not require cutting-edge semiconductor manufacturing processes or specialised cell libraries to maximise the power and space efficiency of convolution operations, which account for the majority of inference processing.
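To make the idea concrete, here is a minimal sketch of what quantizing weights down to 1–2 bits can look like. The scheme below (per-tensor scaling with sign binarization or a simple ternary threshold) is an illustrative assumption, not LeapMind's published algorithm:

```python
import numpy as np

def quantize_weights(w, bits=1):
    """Sketch of extreme low-bit weight quantization.

    NOTE: illustrative only; this is not LeapMind's actual scheme.
    1 bit: sign binarization, w -> {-a, +a}, a = mean(|w|).
    2 bit: ternary, w -> {-a, 0, +a}, with a simple magnitude threshold.
    """
    a = np.abs(w).mean()          # per-tensor scale factor
    if bits == 1:
        return a * np.sign(w)     # binary: {-a, +a}
    if bits == 2:
        t = 0.5 * a               # weights below threshold snap to zero
        q = np.where(np.abs(w) < t, 0.0, np.sign(w))
        return a * q              # ternary: {-a, 0, +a}
    raise ValueError("only 1- or 2-bit quantization in this sketch")

w = np.array([0.8, -0.05, 0.3, -0.6])
print(quantize_weights(w, bits=1))  # every weight becomes ±mean(|w|)
print(quantize_weights(w, bits=2))  # small weights are zeroed out
```

With 1-bit weights, each multiply in a convolution degenerates to a sign flip, which is what makes very small, low-power arithmetic circuits possible.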

It will enable the inclusion of deep learning capabilities in various edge devices that are technologically limited by power consumption and cost, such as consumer appliances (household electrical goods), industrial machinery (construction equipment), surveillance cameras, and broadcasting equipment as well as miniature machinery and robots with limited heat dissipation capabilities.

LeapMind said that it was simultaneously launching several related products and services. These include the Efficiera SDK, a software development tool providing a dedicated learning and development environment for Efficiera; the Efficiera Deep Learning Model, for efficient training of deep learning models; and Efficiera Professional Services, an application-specific semi-custom model-building service.

Efficiera has been designed to offer both power and space efficiencies, contributing to power savings and providing cost reductions in AI-equipped products. In addition, because the circuit information is licensed rather than being provided as a module or device, customers can integrate Efficiera within a device featuring other circuits, thereby contributing to reduced BoM (Bill of Materials) costs for mass-produced products equipped with AI capabilities.

The circuit configuration offers:

  • Power Savings: The power required for convolutional processing is reduced by reducing the amount of data transfer and the number of bits.
  • Performance: The number of calculation cycles can be reduced by simplifying the calculation logic, thereby improving calculation performance relative to area and on a per-cycle basis.
  • Space savings: The silicon area is reduced while maintaining performance by reducing the calculation logic using 1–2 bit quantization; thus, the area per computing unit is minimised.
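The savings described above stem from the fact that, at 1 bit, a multiply-accumulate collapses into bitwise logic. As an illustrative sketch (not LeapMind's actual Efficiera circuit, which is not public), a dot product of sign-binarized vectors can be computed with just XNOR and popcount:

```python
# Illustrative only: shows how binarized MACs reduce to XNOR + popcount.
# This is a generic binary-network trick, not LeapMind's published design.

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two length-n {-1,+1} vectors packed as bit masks
    (bit=1 means +1, bit=0 means -1), via XNOR and popcount."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # bit set where signs match
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # matches minus mismatches

# a = [+1, -1, +1, -1], w = [+1, +1, -1, -1], packed LSB-first
a = 0b0101
w = 0b0011
print(binary_dot(a, w, 4))  # (+1)(+1) + (-1)(+1) + (+1)(-1) + (-1)(-1) = 0
```

In hardware, XNOR gates and popcount trees are far smaller and cheaper than floating-point multipliers, which is what yields the area and power reductions per computing unit.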

According to the company, target applications include:

  • "Hazard Proximity Detection" using object detection
    • Helps ensure safety when using industrial vehicles such as construction machinery, by detecting surrounding people and obstacles.
  • "High quality video streaming" using noise reduction
    • Improves image quality by eliminating image noise when shooting under low-light conditions and by blocking noise caused by image codecs.
  • "Higher resolution for video footage" using super-resolution
    • Converts low-resolution video data into resolutions suitable for display devices.