Designed to bring high-performance AI inference acceleration to edge servers and industrial vision systems, the InferX X1 PCIe board provides customers with enhanced AI inference capabilities where high accuracy, high throughput, and low power on complex models are required.
Leveraging a dynamic TPU array architecture, the board is built around low-latency processing of Batch=1 workloads, with a special focus on challenging edge vision applications. The InferX X1 offers leading-edge performance while remaining flexible, allowing customers to seamlessly migrate to new AI models in the future and adapt to changing system requirements and protocols.
"The X1P1 has consistently demonstrated a superior value proposition for customers looking for efficient yet high-performance inference acceleration in edge applications," said Dana McCarty, Vice President of Sales and Marketing for Flex Logix's Inference Products. "Not only are we delivering on our promise to bring high-end AI capabilities to volume mainstream markets, but we are also allowing our customers to future-proof their designs by enabling them to support evolving models, which is something many competitor products fail to provide."
The company claims that the InferX X1P1 board offers the most efficient AI inference acceleration for edge AI workloads such as YOLOv3. Many customers need high-performance, low-power object detection and other high-resolution image processing capabilities for robotic vision, security, retail analytics, and many other applications.
The InferX X1P1 board will be available in production quantities starting in November 2021. Flex Logix also offers a software toolkit to help customers port their models to the X1P1 board.