Gaudi AI training processor from Habana Labs

Habana Labs, a developer of AI processors, has unveiled the Habana Gaudi AI training processor, which it claims will enable training systems to deliver up to a four-fold increase in throughput over systems built with an equivalent number of GPUs.

The architecture enables near-linear scaling of training-system performance, as high throughput is maintained even at smaller batch sizes, allowing Gaudi-based systems to scale from a single device to large systems built with hundreds of Gaudi processors.
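To see why sustaining throughput at small batch sizes matters for scaling, consider a rough data-parallel model: a fixed global batch is split across devices, so each device's share shrinks as the system grows. The Python sketch below uses purely illustrative numbers (not Habana figures) to contrast a device whose throughput stays flat at small batches with one whose throughput degrades.

    # Back-of-the-envelope data-parallel scaling model; all numbers are illustrative.
    def scaled_throughput(num_devices, global_batch, per_device_rate):
        """per_device_rate maps a per-device batch size to samples/sec for one device."""
        per_device_batch = max(1, global_batch // num_devices)
        return num_devices * per_device_rate(per_device_batch)

    flat_rate = lambda b: 1000.0                             # holds 1,000 samples/sec at any batch size
    batch_bound_rate = lambda b: 1000.0 * min(1.0, b / 64)   # degrades once the per-device batch drops below 64

    for n in (1, 8, 64, 128):
        print(n, scaled_throughput(n, 1024, flat_rate), scaled_throughput(n, 1024, batch_bound_rate))

With 128 devices sharing a global batch of 1,024, the flat-rate device still delivers 128,000 samples/sec (linear scaling), while the batch-bound device falls to 16,000 samples/sec.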

Gaudi also brings another industry first to AI training: on-chip integration of RDMA over Converged Ethernet (RoCE v2) functionality, enabling AI systems to scale to any size using standard Ethernet. As a result, Habana Labs’ customers will be able to use standard Ethernet switching both to scale up and to scale out AI training systems.
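For illustration only, the sketch below shows the general scale-out pattern such a fabric serves: data-parallel workers exchanging gradients with an all-reduce over an ordinary IP network. It uses PyTorch's torch.distributed with the gloo (TCP/IP) backend purely as a stand-in; it is not the SynapseAI or RoCE programming interface.

    # Generic gradient all-reduce over a commodity network; a stand-in for the idea,
    # not Habana's software stack. Assumes the usual RANK / WORLD_SIZE / MASTER_ADDR /
    # MASTER_PORT environment variables are set by the job launcher.
    import os
    import torch.distributed as dist

    def init_workers():
        dist.init_process_group(backend="gloo",
                                rank=int(os.environ["RANK"]),
                                world_size=int(os.environ["WORLD_SIZE"]))

    def average_gradients(model):
        # Sum each gradient across all workers over the network, then average.
        world_size = dist.get_world_size()
        for param in model.parameters():
            if param.grad is not None:
                dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
                param.grad /= world_size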

Ethernet switches are multi-sourced, offer virtually unlimited scalability in speeds and port count, and are already used in datacentres to scale compute and storage systems. In contrast to Habana’s standards-based approach, GPU-based systems rely on proprietary system interfaces that inherently limit scalability and choice for system designers.

“With its new products, Habana has quickly extended from inference into training, covering the full range of neural-network functions,” commented Linley Gwennap, principal analyst of The Linley Group. “Gaudi offers strong performance and power efficiency among AI training accelerators. As the first AI processor to integrate 100G Ethernet links with RoCE support, it will enable large clusters of accelerators built using industry-standard components.”

The Gaudi processor includes 32GB of HBM-2 memory and is currently offered in two forms:

  • HL-200 – a PCIe card supporting eight ports of 100Gb Ethernet;
  • HL-205 – a mezzanine card compliant with the OCP-OAM specification, supporting 10 ports of 100Gb Ethernet or 20 ports of 50Gb Ethernet.

Habana is also introducing an eight-Gaudi system called HLS-1, which includes eight HL-205 mezzanine cards, PCIe connectors for external host connectivity and 24 100Gb Ethernet ports for connecting to off-the-shelf Ethernet switches, allowing scale-up within a standard 19-inch rack by populating it with multiple HLS-1 systems.
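The stated numbers imply a simple port budget, worked through below; the assumption that the links not routed to the 24 external switch ports form the intra-box Gaudi-to-Gaudi fabric is our inference and is not spelled out in the announcement.

    # HLS-1 port arithmetic from the figures above; the internal-link split is assumed.
    cards = 8
    ports_per_card = 10                    # 100Gb Ethernet ports on each HL-205
    external_ports = 24                    # exposed for off-the-shelf Ethernet switches

    total_ports = cards * ports_per_card               # 80 x 100GbE links in the chassis
    internal_ports = total_ports - external_ports      # 56 links presumed used inside the box
    print(external_ports // cards, internal_ports // cards)  # 3 external and 7 internal links per Gaudi

Seven internal links per device would be just enough for direct all-to-all connectivity among eight Gaudi processors, consistent with the scaling claims, though the announcement does not describe the internal topology.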

Gaudi is the second purpose-built AI processor to be launched by Habana Labs in the past year, following the Habana Goya AI Inference Processor.

“Training AI models requires exponentially higher compute every year, so it’s essential to address the urgent needs of the datacentre and cloud for radically improved productivity and scalability. With Gaudi’s innovative architecture, Habana delivers the industry’s highest performance while integrating standards-based Ethernet connectivity that enables unlimited scale,” said David Dahan, CEO and co-founder of Habana Labs.

The Gaudi processor is fully programmable and customisable, incorporating a second-generation Tensor Processing Core (TPC) cluster along with development tools, libraries and a compiler that collectively deliver a more comprehensive and flexible solution. Habana Labs’ SynapseAI software stack consists of a rich kernel library and an open toolchain for customers to add proprietary kernels.
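As a rough illustration of the extensibility described, and nothing more, the sketch below shows the common pattern of a kernel registry that ships with stock kernels and accepts customer-supplied ones; the names used (KernelRegistry, register, lookup) are hypothetical and are not SynapseAI APIs.

    # Hypothetical illustration of an extensible kernel library; not the SynapseAI API.
    class KernelRegistry:
        def __init__(self):
            self._kernels = {}

        def register(self, op_name, kernel_fn):
            # A proprietary TPC kernel would be registered under the graph op it implements.
            self._kernels[op_name] = kernel_fn

        def lookup(self, op_name):
            return self._kernels[op_name]

    registry = KernelRegistry()
    registry.register("relu", lambda xs: [max(x, 0.0) for x in xs])       # stock kernel
    registry.register("custom_scale", lambda xs: [0.5 * x for x in xs])   # customer-added kernel
    print(registry.lookup("custom_scale")([2.0, -4.0]))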