The rise of the hyperscale data centre

The modern data centre is becoming more complex as it attempts to handle the proliferation of mobile devices and the billions of newly connected devices, all of which are increasing the pressure on data infrastructure. Customer expectations have never been higher: users will demand a seamless level of service, even as the demand for data increases exponentially.

Currently, there are an estimated 22 billion connected devices in an ever-expanding network of people and machines. The scale of data being generated is driving the rise of the hyperscale data centre, with massively scalable compute architectures capable of managing increasingly complex workloads and continued growth in data.

“We are seeing a fundamental change in the architecture of data centres,” says Philippe Delansay, co-founder and senior vice president of business development at Aquantia, a company that specialises in high-speed semiconductor connectivity solutions. “When you use an application, you send a single query to extract a set of data that resides on a server in a data centre. That request runs from the core, routed via an aggregation switch to the server and then back the same way. You have generated what is called a North/South traffic pattern.

“Social media is transforming that relationship. When you use Facebook or LinkedIn, you’re not requesting one piece of data, you’re requesting multiple pieces of data that reside on different servers within the data centre. Your request is generating a flurry of East/West activity. As the topology is essentially North/South, you are trying to pull data from different servers when there is no direct connection via the aggregation switch from the core to the server.”

According to Delansay, this East/West topology will be at the heart of the next generation of hyperscale data centres, with switches providing an endless number of connections to different servers.

“With this architecture, you can go from any server to another without having to go through the core. Data centres will be more scalable and a lot bigger,” he suggests.
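The difference between the two topologies can be sketched in terms of switch hops. The hop counts below are a generic illustration of a classic three-tier tree versus a leaf-spine fabric (my own simplification, not figures from the article):

```python
# Illustrative hop counts: East/West traffic in a three-tier tree
# versus a leaf-spine (East/West-oriented) fabric.

def three_tier_hops(same_agg: bool) -> int:
    """Switches crossed: server -> ToR -> agg [-> core -> agg] -> ToR -> server."""
    # Servers under the same aggregation switch: ToR, agg, ToR = 3 switches.
    # Different aggregation domains: traffic hairpins through the core = 5.
    return 3 if same_agg else 5

def leaf_spine_hops(same_leaf: bool) -> int:
    """Switches crossed: server -> leaf [-> spine -> leaf] -> server."""
    # Any two servers are at most leaf-spine-leaf apart = 3 switches.
    return 1 if same_leaf else 3

# Worst-case East/West path shrinks from 5 switch hops to 3,
# and never depends on a shared core bottleneck.
print(three_tier_hops(same_agg=False))   # 5
print(leaf_spine_hops(same_leaf=False))  # 3
```

The point is not the exact numbers but that in a leaf-spine design every server pair has the same short, predictable path, which is what makes the fabric "more scalable and a lot bigger".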

Delansay says that, when comparing the growth between total IP traffic and the aggregate amount of bandwidth available inside data centres, growth has remained balanced until now. “But that is about to change.”

10Gbit/s has been the dominant rate in long-haul networks since 1999, a period in which IP traffic has grown at high double-digit rates annually. As a result, the next step will be 100Gbit/s, bypassing the relatively expensive 40Gbit/s technology.

Responding to a sweeping industry trend toward higher density switch and server configurations, a number of optical solutions have been proposed for 100Gbit/s – but they are proving costly.

According to Crehan Research, hyperscale data centres will account for more than 50% of total traffic by 2020, up from 25% at present.

Analysis of switch and server connectivity deployments in these centres has shown that a large proportion of the links span just a few metres.

“According to Crehan Research, the majority of direct server and storage Ethernet network connections are within just 3m,” says Delansay.

Interconnect solutions that are optimised for hundreds of metres or even a couple of kilometres are not suitable for short range links, which means there is a need for complementary solutions.

"A new generation of hyperscale data centres will need a new type of infrastructure if they are to support 100Gbit/s"

“This raises the issue of how you address the challenges associated with high speed signalling over copper interconnects,” Delansay explains, “because most direct server and storage Ethernet network connections in hyperscale data centres use this technology. Traditionally, electrical interconnects have delivered the lowest cost and power options for the short reach space, whereas optical solutions have been deployed in longer reach applications thanks to the low loss of optical fibres.”

The cost of deploying optical transceivers remains prohibitively high, so to handle this surge in bandwidth, Aquantia has developed a technology aimed at inter- and intra-rack connectivity. This is intended to complement the longer reach optical connectivity solutions currently used in hyperscale data centre and cloud computing environments.

“Our QuantumStream technology is a high-performance connectivity architecture which, we believe, could have the potential to revolutionise next-generation hyperscale data centres,” Delansay argues.

QuantumStream, developed by Aquantia through a strategic collaboration with Globalfoundries, is the first 100Gbit/s all-electrical technology capable of delivering low latency to networking applications.

“It is based, in part, on our Mixed-Mode Signal Processing (MMSP) and Multi-Core Signal Processing (MCSP) technologies,” Delansay explains. “The brute force approach to removing noise from the signal is to go digital: you use a big converter with high resolution and literally throw everything at the signal to remove the distortion. That approach is both inefficient and power hungry.

“The approach we’ve taken is to do some of the signal processing in the analogue domain. Technically, it is not easy, but if you can remove part of the noise there, you won’t need as much resolution in the converter. The digital circuitry is smaller, as is the converter, meaning less noise and power.
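The trade-off Delansay describes can be made concrete with the standard ideal-ADC rule of thumb, SNR ≈ 6.02·N + 1.76 dB: every ~6dB of distortion removed in the analogue domain saves roughly one bit of converter resolution. The figures below are a back-of-the-envelope illustration of that rule, not Aquantia's design data:

```python
# Rule of thumb for an ideal ADC: SNR (dB) ~= 6.02 * N + 1.76,
# where N is the resolution in bits. Rearranged, it gives the
# minimum resolution needed to hit a target SNR.

def required_bits(target_snr_db: float) -> float:
    """Minimum ideal-ADC resolution (bits) for a target SNR in dB."""
    return (target_snr_db - 1.76) / 6.02

# Hypothetical numbers: a 50 dB target, with 12 dB of noise and
# distortion cleaned up in the analogue domain before the converter.
all_digital = required_bits(50.0)          # converter handles everything
with_analogue = required_bits(50.0 - 12.0) # analogue stage does part of the work

print(round(all_digital, 1))    # ~8.0 bits
print(round(with_analogue, 1))  # ~6.0 bits
```

Two bits of resolution may sound small, but converter area and power grow steeply with resolution at these sample rates, which is why shrinking the digital circuitry and the converter "mean[s] less noise and power".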

“Building on our experience, QuantumStream will provide a real leap in data rate performance over a single lane of copper, which has always been believed to be solely the realm of optical techniques,” Delansay suggests.

According to Delansay, this technology will enable system vendors and data centre operators to push towards higher performance and newer topologies in hyperscale architectures while keeping the reliability, low cost and ease of use of electrical-based interconnects, and without having to rip out existing cabling.

Aquantia’s collaboration with Globalfoundries has been crucial, Delansay suggests, as it brings technical expertise in high-speed SERDES design.

“A SERDES is the fundamental building block and responsible for the transport of data between switches, servers, routers and storage equipment in data centres that use a variety of channels, such as optical fibres, electrical copper cables and backplanes,” Delansay says.

Under the agreement, Globalfoundries provided Aquantia with access to its latest generation of SERDES, capable of delivering speeds of up to 56Gbit/s.

Aquantia is developing low-power, high-performance, high-density silicon solutions to the issues being faced by data centre operators

“We took that technology and combined the 56Gbit/s IP core with our MMSP and MCSP architectures, which we have developed over the past decade, to deliver a 100Gbit/s interconnect high performance SERDES solution,” Delansay explains.
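The article does not disclose the modulation scheme behind that combination, but the underlying serial-lane arithmetic is generic: the bit rate of a single lane is its symbol (baud) rate times the bits carried per symbol. A minimal sketch, with illustrative figures only:

```python
import math

# Bit rate of a single serial lane = symbol rate * bits per symbol,
# where bits per symbol = log2(number of signal levels).
# Illustrative only; not a description of QuantumStream's actual scheme.

def lane_rate_gbps(baud_gbd: float, levels: int) -> float:
    """Bit rate (Gbit/s) of one lane from baud rate and signal levels."""
    return baud_gbd * math.log2(levels)

print(lane_rate_gbps(56, 2))  # NRZ (2 levels) at 56 Gbaud
print(lane_rate_gbps(56, 4))  # PAM4 (4 levels) at 56 Gbaud
```

This is why a 56Gbit/s-class SERDES core, combined with additional signal processing, can be pushed well beyond its nominal rate on a single copper lane: more bits per symbol, rather than a faster symbol clock, does the heavy lifting.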

The collaboration is intended to break through the perceived technical barriers in continuing the current electrical connectivity roadmap to 100Gbit/s and, crucially, QuantumStream technology can also be leveraged to deliver up to 400Gbit/s on conventional cables. This could help to resolve one of the more significant challenges facing the networking and data centre sector as it looks to migrate to 100Gbit/s.

“QuantumStream should be seen as a complementary 100Gbit/s solution that will serve both mass server and switch connectivity at shorter reaches over copper lane implementations,” Delansay concludes. “It will help to meet the tremendous growth in bandwidth and data demands, but also enable us to continue to use electrical interconnects in the next generation of hyperscale data centres.”