Flex Logix announces nnMAX AI inference IP in development


Flex Logix Technologies, a supplier of embedded FPGA and AI inference IP, architecture and software, has announced that its nnMAX AI inference IP is in development on the GLOBALFOUNDRIES (GF) 12LP FinFET platform under an agreement with the US Government.

The nnMAX AI IP on GF 12LP, extendable to GF's 12LP+ for enhanced power and performance, is intended for DSP acceleration and AI inference functions. The IP will also be available to commercial customers in 2H 2021.

"We are excited to expand our nnMAX IP portfolio in support of aerospace and commercial programs requiring high-performance edge inference solutions and manufacturing in an advanced US wafer fab," said Geoff Tate, CEO and co-founder of Flex Logix. "No other inference solution on the market delivers more throughput on tough models for less dollars and less watts, which is the number one requirement customers are asking for today."

nnMAX AI Inference provides TensorFlow Lite/ONNX programmable inference with more throughput per unit of silicon area than current alternatives. Flex Logix's nnMAX is scalable, enabling an N×N array of nnMAX inference tiles to deliver N² the throughput of a single tile. With nnMAX available on GF's 12LP, it will now be possible to manufacture efficient AI inference chips for more demanding processing needs in the United States.
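The tile-scaling claim above can be sketched as a simple calculation: an N×N array contains N² identical tiles, so aggregate throughput grows quadratically in N, assuming ideal parallelism across tiles. The function and the 1-TOPS tile figure below are illustrative assumptions, not Flex Logix specifications.

```python
def array_throughput(single_tile_tops: float, n: int) -> float:
    """Aggregate throughput of an n x n inference-tile array, in the same
    units as single_tile_tops (e.g. TOPS), assuming perfect scaling."""
    # An n x n array holds n * n tiles, hence n^2 times one tile's throughput.
    return single_tile_tops * n * n

# Hypothetical 1-TOPS tile: a 4x4 array yields 16x the single-tile throughput.
print(array_throughput(1.0, 4))  # 16.0
```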

"Flex Logix's nnMAX provides a unique reconfigurable data path option for AI inference to enable power optimised implementation," said Mark Ireland, vice president of ecosystem and design solutions at GF. "As a vital supplier of differentiated technologies, this IP is a great addition to GLOBALFOUNDRIES' 12LP, and extendable to 12LP+ that will enable clients, including the US government, to develop innovative solutions for AI training and inference applications."

Through its longstanding partnership with GF, Flex Logix has proven silicon for its EFLX eFPGA on GF's 12LP, with several SoC designs in production and many more in design across multiple customers.