Is the software defined data centre an inevitability?

The digital revolution is still in its infancy: while we may already share information through our smartphones, tablets and desktop computers, 'dumb' objects such as fridges, clothes and furniture are only now set to become digitally connected.

Speaking at the imec Technology Forum in Brussels earlier this year, Padmasree Warrior, a strategic advisor at Cisco, noted: "All industries are being affected and, whatever vertical you are involved in, you will have to contend not only with new technologies, but also new competitors, new business models and the pressure for more and faster innovation – all of which are combining to provide a unique and profound moment for us all."

Warrior described the gigantic streams of data generated by sensors deployed in cars and in factory machinery, as well as by those worn on the body, as the 'next big revolution'.

According to Warrior: "We are connecting not just devices, but also people to processes, data and things. It truly is the Internet of Everything."

When you consider that just 0.6% of the physical world is currently connected, the connectivity load on the network needed to enable the Internet of Things will be enormous – whether that comes from cloud computing, Big Data analysis or the growing use of video and streaming media.

There is going to be relentless, exponentially increasing demand for more data centres and, according to a Data Center Journal article, data centre traffic is expected to hit 7.7 zettabytes (7.7 x 10²¹ bytes) annually by 2017.
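
As a rough back-of-envelope check, 7.7 x 10²¹ bytes a year averages out at around 244Tbyte of traffic every second.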

"In the future, data centres will evolve so that processing is optimised for the specific workloads that a given data centre operates," suggests Giles Peckham, regional marketing director, Xilinx. "In addition, as the scale and performance required continues to expand, you will see the increase in architectures to enable the networking and storage in a data centre to change to enable low latency and high performance scaling to hyper data centre levels."

"The real challenge for data centres is that the rate of growth of big data is higher than the rate of growth of memory bandwidth and networking access bandwidth from CPUs," says Mike Strickland, director of strategic technical marketing, Altera.

Power consumption is another issue that will have to be addressed, as it is set to rise astronomically if left unchecked.

"MPUs provide good performance, but suffer from high power consumption," suggests Peckham. "Software engineers find them the easiest devices to program but, in order to solve the performance portion of the data centre problem, many companies have been creating equipment with graphics processing units (GPUs) or CPU systems accelerated by GPU cards. GPUs have performance that is far superior to CPUs in data centre applications."

Unfortunately, GPU power consumption is even worse. "Together, the performance is extremely fast, but power consumption is abysmal," Peckham contends.

Strickland adds that Intel and Microsoft are seeing increased integration of FPGAs with CPUs, with up to a third of cloud service provider server nodes forecast to be using FPGAs by 2020.

"Microsoft has revealed that it is using FPGAs to accelerate Bing search, neural algorithms and 'smart NICs'," he says. "FPGAs can help with better text compression, encryption, filtering and deduplication of data before it reaches the CPU."
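
To make that concrete, the sketch below shows the kind of filtering kernel that might be offloaded in this way. It is plain OpenCL C and purely illustrative – the record layout, the 'score' field and the threshold are all hypothetical rather than drawn from any real deployment. Records that fail the test are dropped on the FPGA, so the CPU only ever sees the survivors.

    /* Hypothetical record filter in OpenCL C. A vendor toolchain
     * compiles this offline into an FPGA bitstream; the host streams
     * records through it and reads back only the matches. */
    typedef struct { uint id; float score; } record_t;

    __kernel void filter_records(__global const record_t *in,
                                 __global record_t *out,
                                 volatile __global uint *out_count,
                                 const float threshold)
    {
        size_t i = get_global_id(0);           /* one work-item per record */
        record_t r = in[i];
        if (r.score >= threshold) {
            uint slot = atomic_inc(out_count); /* claim an output slot */
            out[slot] = r;
        }
    }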

"Many companies are turning to an FPGA centric approach in which they pair FPGAs with other processors to maximise data centre equipment performance," says Peckham. "Vendors have demonstrated that a discrete FPGA paired with a discrete CPU raises power per card minimally, but improves performance dramatically, yielding a significant improvement in performance/Watt.

"Others believe that performance/ Watt can be further improved with a chip that combines an x86 processor core interconnected to FPGA logic on a single SoC."

FPGAs and SoCs make it possible to extend the life of a design and to adapt it to changing requirements, introducing new features and complying with evolving data centre standards.

Data centre of the future
The data centre of the future will need to be open, secure, automated and, most importantly, application relevant. As a result, according to both Peckham and Strickland, automation in the form of the emerging software defined data centre will continue to advance. As data centres scale to ever larger numbers of servers, they will need to be able to redefine themselves around evolving workloads and the applications they need to run.

For many, the software defined data centre is inevitable as it is more agile, secure and scalable than even the latest hardware defined data centre (HDDC) architectures.

According to its proponents, in this type of data centre, infrastructure is virtualised, delivered as a service and controlled by software. It is already being used by some of the largest cloud operators, such as Google, Facebook and Amazon.

According to its critics, the HDDC is costly and time consuming to implement, stifles innovation, binds organisations to specific hardware and, as a result, is less flexible.

Development happens much faster in software than it does in hardware and when software is separated from hardware, both will be able to evolve independently.

Network virtualisation also appeals to business managers because it means new applications can be deployed in minutes.

Data centre developers tend to be accustomed to programming x86-based architectures and come from a pure software programming background.

In response, the OpenCL language was designed to help developers move CPU programs to faster GPUs.

"Over the last two years, OpenCL has evolved and customers are now able to target FPGAs" says Peckham. "This development is opening up new possibilities for future data centre equipment architectures and even ubiquitous networks."

"In the past," says Strickland, "the lack of a programmer friendly interface was certainly a challenge. Altera has been providing an OpenCL interface for a few years now, which has demonstrated strong results. This compiler is also a foundation to offer high level language interfaces in the future.

Peckham believes a software-defined development environment will enable new levels of design team productivity and expand the reach of FPGAs, SoCs and 3D ICs to a much larger user base of software engineers.

"One new development environment – SDAccel – enables data centre equipment programmers with no FPGA experience to program FPGAs for data centre and cloud computing infrastructure equipment using OpenCL, C or C++," he explains.

The resulting FPGA-based equipment will be able to offer much better performance/Watt than GPU- and CPU-based equipment.

Data centres will need to advance their software defined capability as they continue to grow in size and increasingly need to be reoptimised for the applications they run.

Peckham concludes: "In the longer term, the trend toward virtualisation will continue. The key underlying requirement will be for the physical network to deliver the performance and latency needed to support virtualisation.

"This will drive increasing demand for high performance, low latency and highly reconfigurable solutions capable of optimising the physical network to deliver the metrics a specific application or workload requires."