Tachyum to offer TPU Inference IP to edge and embedded markets

Tachyum is offering its Tachyum TPU (Tachyum Processing Unit) intellectual property as a licensable core, allowing developers to take full advantage of AI when building IoT and edge devices.

Tachyum’s Prodigy is the first Universal Processor, combining General Purpose Processors, High Performance Computing (HPC), Artificial Intelligence (AI), Deep Machine Learning, Explainable AI, Bio AI and other AI disciplines in a single chip.

Citing tremendous growth in the AI chipset market for edge inference, Tachyum said it is looking to extend its proprietary Tachyum AI data type beyond the data centre by providing its internationally registered and trademarked IP to outside developers.

Key features of the TPU inference and generative AI/ML IP architecture include architectural, transactional and cycle-accurate simulators; tools and compiler support; and licensable hardware IP, including RTL in Verilog, a UVM testbench and synthesis constraints.

Tachyum has 4 bits per weight working for AI training, and 2 bits per weight, as part of its proprietary Tachyum AI (TAI) data type, which will be announced later this year.
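The TAI data type itself is unannounced, so its encoding is unknown. As a generic illustration only, the following sketch shows the standard idea behind storing each weight in 4 bits: map floating-point weights to signed 4-bit integers plus a shared scale factor. The function names and the symmetric-quantization scheme are illustrative assumptions, not Tachyum's actual format.

```python
# Generic 4-bit-per-weight quantization sketch.
# NOTE: this is NOT Tachyum's TAI format (which is unannounced); it only
# illustrates the common symmetric-quantization approach to 4-bit weights.

def quantize_4bit(weights):
    """Map float weights to signed 4-bit codes in [-8, 7] plus one scale."""
    scale = max(abs(w) for w in weights) / 7.0  # 7 = largest positive int4
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [c * scale for c in codes]

weights = [0.42, -0.91, 0.07, 0.63]
codes, scale = quantize_4bit(weights)
approx = dequantize(codes, scale)
# Each code fits in 4 bits; the reconstruction error per weight is
# bounded by scale / 2, at the cost of coarse resolution.
```

The appeal of such low-bit formats is the 8x memory and bandwidth reduction versus 32-bit floats, which is what makes AI inference practical on constrained edge and embedded devices.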

“Inference and generative AI is coming to almost every consumer product and we believe that licensing TPU is a key avenue for Tachyum to proliferate our world-leading AI into this marketplace for models trained on Tachyum’s Prodigy Universal Processor chip,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “As Tachyum is the only owner of the TPU trademark within the AI space, it is a valuable corporate asset to not only Tachyum but to all the vendors who respect that trademark and ensure that they properly license its use as part of their products.”

Because Prodigy is a Universal Processor offering utility across all workloads, Prodigy-powered data centre servers can switch between computational domains (such as AI/ML, HPC and cloud) on a single architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilisation, Prodigy is able to reduce CAPEX and OPEX significantly while delivering improved performance, power and economics.

Prodigy integrates 192 high-performance, custom-designed 64-bit compute cores to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and up to 6x for AI applications.