Recogni raises $102mn to support next generation AI inference system


Recogni, a developer of AI-based computing, has successfully closed a $102mn Series C funding round.

The round, co-led by Celesta Capital and GreatPoint Ventures, will fund next-generation system development for AI inference solutions that boost performance and power efficiency while delivering a much lower total cost of ownership.

With AI technology advancing rapidly and model sizes increasing, AI inference (the process of running live data through trained models to generate predictions or complete tasks) needs to be more performant and power-efficient than previous solutions.

These improvements will be crucial for broadening AI applications sustainably, and Recogni's scalable, power-efficient AI inference acceleration technology is seen as promising for unlocking new AI computing possibilities in critical areas such as generative AI and intelligent autonomy.

Marc Bolitho, CEO of Recogni, points out the urgent gap between rapidly expanding AI models and the slower evolution of computing capability. "The critical need for solutions that directly address the key challenges in AI inference processing - compute capability, scalability, accuracy, and energy savings - is more urgent than ever."

Current cloud-based AI training and inference tend to utilise power-intensive GPUs (graphics processing units), placing substantial stress on the compute capacity, cooling, and power systems of data centres, and are proving to be both financially and ecologically unsustainable.

By contrast, Recogni’s solution can achieve 1,000 TOPS with less than 10ms of processing delay and below 25 watts of power consumption, which the company says is 10-20 times more power efficient than competing solutions.
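As a back-of-the-envelope check, the quoted figures (1,000 TOPS at under 25 watts) imply roughly 40 TOPS per watt. The sketch below works through that arithmetic; the competitor figure is hypothetical, chosen only to illustrate how a 10-20x efficiency gap would arise, and is not from the article.

```python
def tops_per_watt(tops: float, watts: float) -> float:
    """Compute efficiency in TOPS (tera-operations per second) per watt."""
    return tops / watts

# Recogni's quoted spec: 1,000 TOPS at 25 W.
recogni = tops_per_watt(1000, 25)      # 40.0 TOPS/W

# Hypothetical GPU-class accelerator for comparison (assumed figures).
competitor = tops_per_watt(1000, 400)  # 2.5 TOPS/W

print(recogni / competitor)            # → 16.0, within the claimed 10-20x range
```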

“When you have a solution that achieves 10x higher compute density, 10x lower power and 13x less cost per query, it’s a no-brainer to invest and help bring that solution to market,” said Ashok Krishnamurthi, managing partner at GreatPoint Ventures. “The compute demand for AI applications is going to be significantly more than what the experts are forecasting, and addressing the power consumption piece of the puzzle is critical at this stage resulting in significantly lower operational costs.”