Flex Logix unveils AI edge inference chip

Flex Logix Technologies has announced the availability of its InferX X1, which it claims is the industry’s fastest AI inference chip for edge systems.

InferX X1 has been designed to accelerate neural network models for tasks such as object detection and recognition; the device runs the YOLOv3 object detection and recognition model.

Crucially, the InferX X1 comes at a high-volume price point that, for the first time, will enable high-quality, high-performance AI inference in mass-market products selling in the millions of units.

“Customers with existing edge inference systems are asking for more inference performance at better prices so they can implement neural networks in higher volume applications. InferX X1 meets their needs with both higher performance and lower prices,” said Geoff Tate, CEO and cofounder of Flex Logix. “InferX X1 delivers a 10-to-100 times improvement in inference price/performance versus the current industry leader.”

“The technology announced by Flex Logix is a game changer and will significantly expand AI applications by bringing inference capabilities to the mass market,” said Mike Gianfagna, principal at Gforce Marketing. “This is going to be a major disruptor in a market that is already forecast to grow exponentially in the future.”

The InferX X1 features a new architecture that combines the company’s XFLX double-density programmable interconnect with a reconfigurable Tensor Processor comprising 64 one-dimensional Tensor Processors, which can be reconfigured to efficiently implement a wide range of neural network operations. Because reconfiguration takes only microseconds, each layer of a neural network model can run on a full-speed data path optimised for that layer.
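
To make that layer-by-layer reconfiguration idea concrete, here is a minimal Python sketch of the execution pattern the company describes. All names here (TensorProcessorArray, reconfigure, run_model) are illustrative assumptions for this sketch, not Flex Logix’s actual API or hardware interface.

```python
# Conceptual sketch only: models the idea of reconfiguring an array of
# tensor processors between layers, so each layer runs on a datapath
# configured specifically for its operation.

from dataclasses import dataclass, field


@dataclass
class Layer:
    name: str                 # e.g. "conv1"
    op: str                   # e.g. "conv3x3", "conv1x1", "pool"
    params: dict = field(default_factory=dict)  # weights/shape metadata


class TensorProcessorArray:
    """Stand-in for an array of 1-D tensor processors plus a
    programmable interconnect, in the spirit of the X1 description."""

    def __init__(self, num_processors: int = 64):
        self.num_processors = num_processors
        self.config = None

    def reconfigure(self, layer: Layer) -> None:
        # In hardware this step reportedly takes microseconds; here it
        # just records which operation the datapath is wired for.
        self.config = layer.op

    def run(self, layer: Layer, activations):
        assert self.config == layer.op, "datapath not configured for this layer"
        # Placeholder for the actual tensor computation.
        return f"output of {layer.name}"


def run_model(model: list, inputs):
    """Execute a model layer by layer, reconfiguring between layers."""
    tpa = TensorProcessorArray(num_processors=64)
    x = inputs
    for layer in model:
        tpa.reconfigure(layer)   # microsecond-scale in silicon
        x = tpa.run(layer, x)
    return x


if __name__ == "__main__":
    model = [
        Layer("conv1", "conv3x3"),
        Layer("conv2", "conv1x1"),
        Layer("pool1", "pool"),
    ]
    print(run_model(model, inputs="image tensor"))
```

The point of the pattern is that reconfiguration is fast enough to happen between every layer, so no single fixed datapath has to compromise across the whole model.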

The InferX X1 and its associated software will be available in Q2 2021.