IHWK and Microchip to develop an analogue compute platform for edge AI/ML inferencing


In a move designed to address the rapid rise of Artificial Intelligence (AI) computing at the edge, Intelligent Hardware Korea (IHWK) is developing a neuromorphic computing platform for neurotechnology devices and field programmable neuromorphic devices.

Microchip Technology, via its Silicon Storage Technology (SST) subsidiary, is assisting with the development of this platform by providing an evaluation system for its SuperFlash memBrain neuromorphic memory solution.

The solution is based on Microchip’s nonvolatile memory (NVM) SuperFlash technology and is optimised to perform vector matrix multiplication (VMM) for neural networks through an analogue in-memory compute approach.
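To make the in-memory VMM idea concrete, here is a minimal numerical sketch (not Microchip's implementation): each weight is stored as a memory-cell conductance, input activations are applied as voltages, and Kirchhoff's current law sums the per-cell currents down each bit line, producing the whole vector-matrix product in one parallel read.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(size=(4, 3))   # cell conductances encode the weight matrix
V = rng.uniform(size=4)        # input activations applied as row voltages

# One parallel "read": the array physically sums G[i, j] * V[i]
# down each bit line, yielding all output currents at once.
I = V @ G

# The same result computed cell by cell, as the physics does it.
I_ref = np.zeros(3)
for j in range(3):
    for i in range(4):
        I_ref[j] += G[i, j] * V[i]

assert np.allclose(I, I_ref)
```

The point of the analogy is that no weight ever moves: the multiply-accumulate happens where the weight is stored, which is what eliminates the DRAM traffic described below.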

The memBrain technology evaluation kit is designed to enable IHWK to demonstrate the power efficiency of its neuromorphic computing platform for running inferencing algorithms at the edge. The end goal is to create an ultra-low-power analogue processing unit (APU) for applications such as generative AI models, autonomous cars, medical diagnosis, voice processing, security/surveillance and commercial drones.

Because current neural-network models for edge inference can require 50 million or more synapses (weights), purely digital solutions struggle to provide enough bandwidth to the off-chip DRAM they depend on, creating a bottleneck that throttles overall neural-network compute.
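A back-of-envelope calculation shows why this bottleneck matters. The 50-million-weight figure is from the article; the 8-bit weight size and 30-inferences-per-second workload are illustrative assumptions, not figures from the announcement.

```python
# Assumed figures for illustration: 8-bit quantised weights and a
# 30-inference-per-second (e.g. video-rate) edge workload.
weights = 50_000_000          # synaptic weights (figure from the article)
bytes_per_weight = 1          # assumption: 8-bit quantised weights
inferences_per_s = 30         # assumption: video-rate workload

# If every weight must be fetched from off-chip DRAM for each inference:
traffic_bytes_per_s = weights * bytes_per_weight * inferences_per_s
print(traffic_bytes_per_s / 1e9, "GB/s of DRAM traffic for weights alone")
# 1.5 GB/s
```

Even under these conservative assumptions, weight traffic alone consumes a substantial share of a low-power DRAM interface's bandwidth and energy budget, which is the cost that keeping weights in on-chip nonvolatile memory avoids.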

The memBrain solution, by contrast, both stores synaptic weights in on-chip floating-gate cells operating in ultra-low-power sub-threshold mode and uses those same memory cells to perform the computations, offering significant improvements in both power efficiency and system latency. Compared with traditional digital DSP and SRAM/DRAM-based approaches, it delivers, according to Microchip, 10 to 20 times lower power usage per inference decision and can significantly reduce the overall bill of materials.

To develop the APU, IHWK is also working with the Korea Advanced Institute of Science & Technology (KAIST) in Daejeon on device development and with Yonsei University in Seoul on device design.

The final APU is expected to optimise system-level algorithms for inferencing and to operate at between 20 and 80 TeraOPS per watt.

“By using proven NVM rather than alternative off-chip memory solutions to perform neural network computation and store weights, Microchip’s memBrain computing-in-memory technology is poised to eliminate the massive data communications bottlenecks otherwise associated with performing AI processing at the network’s edge,” said Mark Reiten, vice president of SST, Microchip’s licensing business unit. “Working with IHWK, the universities and early adopter customers is a great opportunity to further prove our technology for neural processing and advance our involvement in the AI space by engaging with a leading R&D company in Korea.”

“Korea is an important hotspot for AI semiconductor development,” said Sanghoon Yoon, IHWK branch manager. “Our experts on nonvolatile and emerging memory have validated that Microchip’s memBrain product based on proven NVM technology is the best option when it comes to creating in-memory computing systems.”

Permanently storing neural models inside the memBrain solution's processing element also supports instant-on functionality for real-time neural network processing. IHWK is leveraging the nonvolatility of SuperFlash memory's floating-gate cells to set a new benchmark in low-power edge computing devices that run machine learning inference with advanced ML models.