Broadcom ships Tomahawk 6, world’s first 102.4 Tbps switch in a single chip


Broadcom has started shipping its Tomahawk 6 switch series, the world’s first to deliver 102.4 Tbps of switching capacity in a single chip.


This is double the bandwidth of any Ethernet switch currently available on the market. Tomahawk 6 has been built to power the next generation of scale-up and scale-out AI networks, delivering much greater flexibility with support for 100G and 200G SerDes and co-packaged optics (CPO).

Tomahawk 6 offers a comprehensive set of AI routing features and interconnect options, and has been designed to meet the demands of AI clusters with more than one million XPUs.

“Tomahawk 6 is not just an upgrade – it’s a breakthrough,” claimed Ram Velaga, senior vice president and general manager, Core Switching Group, Broadcom. “It marks a turning point in AI infrastructure design, combining the highest bandwidth, power efficiency, and adaptive routing features for scale-up and scale-out networks into one platform.

“Demand from customers and partners has been unprecedented. Tomahawk 6 is poised to make a rapid and dramatic impact on the deployment of large AI clusters.”

According to Kunjan Sobhani, lead semiconductor analyst, Bloomberg Intelligence, “AI clusters are scaling from tens to thousands of accelerators, turning the network into a critical bottleneck even as it is expected to deliver unprecedented bandwidth and latency. By breaking the 100 Tbps barrier and unifying scale-up and scale-out Ethernet, Broadcom’s Tomahawk 6 gives hyperscalers an open, standards-based fabric, free of proprietary lock-in, and a clear, flexible path to the next wave of AI infrastructure.”

The innovations of Tomahawk 6 extend beyond the chip itself, however, delivering system-level power efficiency and cost savings enabled by Broadcom’s best-in-class SerDes and optics ecosystem.

With industry-leading 200G SerDes, it provides the longest reach for passive copper interconnect, enabling high-efficiency, low-latency system design with the highest reliability and lowest total cost of ownership (TCO).

The Tomahawk 6 family includes an option for 1,024 100G SerDes on a single chip, enabling customers to deploy AI clusters with extended copper reach and efficient use of XPUs and optics with native 100G interfaces.
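The two SerDes options quoted above both reach the same aggregate capacity, which is easy to verify with back-of-the-envelope arithmetic (the figures come from the article; the function name here is illustrative, not a Broadcom API):

```python
# Aggregate switching capacity for the two Tomahawk 6 SerDes options
# mentioned in the article: 512 lanes of 200G, or 1,024 lanes of 100G.

def aggregate_tbps(num_serdes: int, gbps_per_lane: int) -> float:
    """Total switching capacity in Tbps for a given SerDes configuration."""
    return num_serdes * gbps_per_lane / 1000

print(aggregate_tbps(512, 200))   # 102.4 Tbps with 200G SerDes
print(aggregate_tbps(1024, 100))  # 102.4 Tbps with 100G SerDes
```

Either way the chip presents 102.4 Tbps; the 100G option simply trades lane speed for a higher lane count to match XPUs and optics with native 100G interfaces.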

For systems requiring optical connectivity, Tomahawk 6 will also be available with co-packaged optics, providing the lowest power and latency while reducing link flaps and improving long-term reliability – essential advantages for hyperscale AI network operators.

According to Broadcom, the Tomahawk 6’s architecture enables unified networks for AI training and inference. Cognitive Routing 2.0 in Tomahawk 6 features advanced telemetry, dynamic congestion control, rapid failure detection, and packet trimming, enabling global load balancing and adaptive flow control. These capabilities are tailored for modern AI workloads, including mixture-of-experts, fine-tuning, reinforcement learning, and reasoning models.
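The telemetry-driven load balancing and adaptive flow control described above belong to a class of techniques often implemented as per-flowlet adaptive routing: a flow may be moved to a less-loaded path, but only during an idle gap so packets within a burst stay in order. The sketch below is a generic illustration of that idea under assumed names, not Broadcom’s Cognitive Routing 2.0 implementation:

```python
# Generic per-flowlet adaptive routing sketch (illustrative only; the
# class name, gap threshold, and load metric are assumptions, not
# details of Cognitive Routing 2.0).

import time

FLOWLET_GAP_S = 0.0005  # idle gap after which a flow may be re-routed

class FlowletRouter:
    def __init__(self, ports):
        self.port_load = {p: 0 for p in ports}   # e.g. queue-depth telemetry
        self.flow_state = {}                     # flow -> (port, last_seen)

    def route(self, flow_id, now=None):
        now = time.monotonic() if now is None else now
        port, last = self.flow_state.get(flow_id, (None, 0.0))
        # Re-pick the least-loaded port only at a flowlet boundary, so
        # packets within a burst stay in order on a single path.
        if port is None or now - last > FLOWLET_GAP_S:
            port = min(self.port_load, key=self.port_load.get)
        self.flow_state[flow_id] = (port, now)
        self.port_load[port] += 1
        return port
```

In practice a switch would drive the load metric from live telemetry rather than a simple packet count, and combine it with congestion signals and failure detection, but the ordering-preserving re-route decision is the same in spirit.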

With scale-out and scale-up networking support, the Tomahawk 6 has been designed to meet all networking demands for emerging 100,000 to one million XPU clusters. Leveraging Ethernet for both scale-out and scale-up interfaces offers significant advantages for network operators, enabling them to use a unified technology stack and consistent operational tools across the entire AI fabric. It also enables fungible interfaces where cloud operators can dynamically partition their XPU assets into the optimal configuration for different customer workloads.

In summary, the Tomahawk 6 series’ key benefits include:

  • 102.4 Tbps of Ethernet switching in a single chip
  • Scale-up cluster size of 512 XPUs
  • 100,000+ XPUs in a two-tier scale-out network at 200 Gbps/link
  • 200G or 100G PAM4 SerDes with support for long-reach passive copper
  • Option for co-packaged optics
  • Cognitive Routing 2.0
  • Unmatched power and system efficiency for AI training and inference
  • Works with any NIC or XPU Ethernet endpoint
  • Support for arbitrary topologies, including scale-up, Clos, rail-only, rail-optimised, and torus
  • Compliant with Ultra Ethernet Consortium specifications
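The “100,000+ XPUs in a two-tier network” figure above is consistent with a standard leaf-spine sizing exercise. Assuming a non-blocking folded Clos in which each leaf splits its ports evenly between XPUs and spine uplinks (a generic Clos sketch, not Broadcom guidance):

```python
# Rough sizing of the two-tier scale-out claim, assuming a non-blocking
# leaf-spine (folded Clos) fabric built from radix-R switches.

def two_tier_endpoints(radix: int) -> int:
    """Max endpoints in a non-blocking two-tier Clos of radix-R switches."""
    down = radix // 2      # leaf ports facing XPUs
    leaves = radix         # each spine dedicates one port per leaf
    return down * leaves   # = R^2 / 2

# Tomahawk 6 at 200 Gbps/link: 102,400 Gbps total / 200 Gbps per link
ports = 102_400 // 200
print(ports)                      # 512 ports per chip
print(two_tier_endpoints(ports))  # 131,072 endpoints: "100,000+" XPUs
```

A 512-port radix at 200 Gbps yields 512² / 2 = 131,072 endpoints in two tiers, comfortably above the 100,000-XPU mark the article cites.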