Tachyum validates Prodigy Universal Processor with Kubernetes

1 min read

Tachyum has announced that it has completed testing and validation of its Prodigy Universal Processor with Kubernetes for hyperscale, high-performance container management.

Running Kubernetes successfully on Prodigy assures Tachyum’s customers and partners of quick, easy, out-of-the-box testing and evaluation in containerised environments.

Tachyum’s software team deployed its Prodigy emulation with a K3s server (a lightweight Kubernetes distribution well suited to testing), two K3s agents, and NGINX, an open-source web server that also functions as a reverse proxy for the HTTP, HTTPS, SMTP, POP3, and IMAP protocols, a load balancer, and an HTTP cache.
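As a rough sketch of that kind of evaluation setup, a manifest like the one below could run NGINX on a small K3s cluster. The resource names, labels, and replica count here are illustrative assumptions, not Tachyum’s actual configuration:

```yaml
# Illustrative NGINX Deployment and Service for a small K3s test cluster.
# Names and the replica count are assumptions, not Tachyum's manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2          # roughly one pod per agent in a server-plus-two-agents cluster
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  selector:
    app: nginx-test
  ports:
    - port: 80
      targetPort: 80
```

Applied with `kubectl apply -f nginx-test.yaml`, this schedules NGINX pods across the agent nodes, exercising the kind of container scheduling the validation targets.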

“Demonstrating that Kubernetes can run natively on Prodigy is essential to customers and partners, such as those using Kubernetes behind performance-sensitive containerised applications,” explained Radoslav Danilak, founder and CEO of Tachyum. “Tachyum’s growing software ecosystem is addressing the needs of an increasingly wide range of data centre, hyperscale, and high-performance workloads that will benefit from Prodigy.”

Clustering, management, and job scheduling in containerised environments can be difficult at scale. Prodigy-based data centres will be able to automate the deployment, scaling, and management of Kubernetes clusters, which can span hosts across on-premises, public, private, or hybrid clouds, growing from a single compute node to massive deployments.
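Automated scaling of that sort is typically expressed declaratively, for example with a HorizontalPodAutoscaler. The target deployment name and the thresholds below are illustrative assumptions:

```yaml
# Illustrative autoscaling policy: grow a deployment from one pod upward
# as CPU load rises. The target name and limits are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-test   # hypothetical deployment to scale
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

With a policy like this, the cluster itself adds or removes pods as load changes, with no operator intervention, which is the kind of hands-off management the paragraph above describes.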

Prodigy has been developed to provide improved data centre performance, power, and economics, reducing CAPEX and OPEX significantly.

Because Prodigy suits both high-performance and line-of-business applications, Tachyum says Prodigy-powered data centre servers will be able to switch seamlessly and dynamically between workloads, eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilisation.

Prodigy integrates 128 high-performance custom-designed 64-bit compute cores and is said to deliver up to 4x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and up to 6x for AI applications.