Outlook 2016: Bringing a ‘green lining’ to The Cloud

With data centres consuming close to 2% of all electricity generated, it’s time to push power efficiency into The Cloud.

The broadband and smartphone revolutions have dramatically changed how we consume and interact with information in our professional and private lives. Storage and computing servers, colloquially known as ‘the Cloud’, are increasingly hosting most of the major applications. They use fast internet connections and extremely powerful resources to consolidate and process far-flung data and respond rapidly to users. This paradigm holds the promise of instant and always-on response, ubiquitous access and significantly lower capital investment for clients.

Cloud servers are an indispensable part of our daily activities, whether it is for consumer applications like Netflix, Facebook and Siri, or the industrial Machine-to-Machine (M2M) activity that is the basis of the Internet of Things (IoT), or enterprise solutions such as SAP and SalesForce.com.

Growth of the Cloud

Every time a person runs a Google search, watches a YouTube video, streams a Netflix movie or posts videos and photos to Facebook, the data centres these companies manage are running billions of operations and drawing megawatts of electricity. Driven by these forces, server capacity has grown at a remarkable rate. Cloud computing is estimated to have grown from essentially zero in 2006 – when Amazon launched its Amazon Web Services (AWS) platform – to a market worth $58 billion in 2014. According to Forrester Research, the public cloud market (excluding captive data centres from Amazon, Google and others) is predicted to reach $191bn by the end of the decade. By way of comparison, WSTS forecasts the entire semiconductor industry will generate worldwide sales of $347bn in 2015.

Cloud computing revenues include the effects of lower prices as the technology matures and competition increases. Even more remarkable has been the cloud’s growth in raw compute horsepower. By one estimate, AWS, the leading cloud service provider, has deployed more than 2.8 million servers worldwide. Architecturally, servers have also evolved significantly by moving to hyperscale and multi-threaded structures, while processor cores have improved their raw throughput significantly. Additional design techniques, such as changing clock speed and supply voltage on the fly and varying the number of simultaneously operational cores, have enabled greater dynamic response to computing load requirements. Nevertheless, they have also added significant complexity to power delivery requirements.

Growing consumption

Most relevant is the electrical energy consumed by these data centres: as the installed base of servers has mushroomed, so too has their energy consumption. Obtaining exact company numbers is difficult, but one exemplary data centre designed to consume 3MW of power hosts more than 8000 servers. Google estimated in 2011 that its data centres alone drew about 260MW continuously, which is about 25% of the output of a state-of-the-art nuclear power plant.

To ease power transmission challenges, the vast majority of the data centres worldwide are located close to massive power sources, like the Columbia River hydroelectric power system. Estimates produced in 2010 claimed that data centres represented between 1% and 1.5% of global electricity consumption, which is equivalent to Brazil’s total consumption. In the US, data centres consume closer to 2% of the country’s total electricity output, which is the equivalent of New Jersey’s total consumption. Essentially, by the end of 2010, data centres had added another New Jersey to the US electric grid – and the load continues to grow.

This massive growth in power consumption has had a significant economic impact. While processor cores offer ever greater processing capability as they follow Moore’s Law and benefit from architectural improvements, their supply voltages have not scaled fast enough to lower overall power consumption. Data centres primarily use power in two ways: to supply the energy the computers need; and to cool them sufficiently, keeping systems within their operational range. Consequently, small improvements in the efficient delivery of power have a leveraged beneficial impact on the bottom line. In addition to the reduced power bill, efficient power delivery can also increase data centre capacity for a given budget – a very important consideration given that the installed capacity is continuing its brisk pace of double-digit annual growth.
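The leverage is easy to see with a back-of-the-envelope sketch. The electricity price and the delivery-efficiency figures below are assumptions chosen for illustration; only the 3MW facility size comes from the article.

```python
# Illustrative arithmetic only: the tariff and efficiency figures are
# assumptions, not data from the article; the 3 MW load is cited above.

FACILITY_POWER_W = 3_000_000     # 3 MW of IT load, as in the article
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.07             # assumed wholesale rate, USD

def annual_energy_cost(it_load_w, delivery_efficiency):
    """Cost of the grid electricity needed to deliver `it_load_w`
    of useful IT load at a given power-delivery efficiency."""
    grid_power_kw = (it_load_w / delivery_efficiency) / 1000
    return grid_power_kw * HOURS_PER_YEAR * PRICE_PER_KWH

baseline = annual_energy_cost(FACILITY_POWER_W, 0.90)
improved = annual_energy_cost(FACILITY_POWER_W, 0.91)
print(f"Saving from one point of efficiency: ${baseline - improved:,.0f}/yr")
```

With these assumed figures, a single percentage point of delivery efficiency is worth roughly $22,000 a year for one 3MW facility – and the saving scales directly with fleet size and tariff.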

Power distribution

State-of-the-art data centre power distribution consists of a series of stepped-down voltages followed by point-of-load power delivery. Raw efficiency is the greatest challenge, but power systems can also add capabilities in a few different ways to enable lower energy consumption.

* Multi-phase operation. The latest generation of infrastructure power converters supports multi-phase operation, maintaining close to peak efficiency from light load to peak load. They achieve this by parallelising multiple power delivery phases, which the controller activates or sheds based on the power draw requirement. In a typical multi-phase system, the regulator modulates each inductor to provide a variable share of the load current.

Multi-phase operation improves on a key shortcoming of single-phase converters, where the efficiency peaks at a nominal load, but drops at very high loads. Instead, multi-phase systems can select the number of phases intelligently, depending on the load. Flattening the efficiency curve across a much larger portion of the operating range frees data centre planners from choosing between optimising for typical and maximum workloads.

* Parallel power channels. Dividing processor cores into ‘power islands’ enables parts of the system to shut down when they are not in use. Power delivery systems have evolved to provide the multiple simultaneous rails this requires.

* Communication. Today’s processor cores communicate anticipated power requirements to the power converters via a digital bus (typically PMBus). The changing load can be a function of additional cores coming on-line, variation of processor clock speed, or the knowledge that the software is processing a particularly intensive sequence. With insight into the expected loading, the controllers can maximise efficiency across the load curve. The ability to measure the duration and level of energy consumed provides another big advantage to service providers: they can use the duration and intensity of system activity to calculate the billings for each process.
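The phase-count decision described in the first bullet can be sketched in a few lines. The per-phase current limit and maximum phase count below are invented figures, not taken from any real controller datasheet.

```python
# Hypothetical phase-shedding logic for a multi-phase regulator.
# The per-phase current limit and phase count are illustrative only.
import math

PHASE_CURRENT_LIMIT_A = 30.0   # assumed maximum current per phase
MAX_PHASES = 6                 # assumed phase count of the converter

def phases_for_load(load_a):
    """Return just enough active phases to carry the load, so each
    phase stays near its most efficient operating point."""
    needed = max(1, math.ceil(load_a / PHASE_CURRENT_LIMIT_A))
    return min(needed, MAX_PHASES)

for load_a in (5, 45, 100, 170):
    print(f"{load_a:>3} A -> {phases_for_load(load_a)} phase(s)")
```

A light 5A load runs on a single phase while a 170A peak engages all six, which is how the efficiency curve stays flat across the operating range.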
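On the telemetry side of the bus, PMBus encodes readings such as output current (READ_IOUT, command code 0x8C) in its ‘Linear11’ format. The command code and format come from the PMBus specification; the sample word below is made up for illustration, not captured from hardware.

```python
# Decoder for the PMBus Linear11 format used by telemetry commands
# such as READ_IOUT (0x8C) and READ_TEMPERATURE_1 (0x8D).
# The sample word is an invented reading, not from a real device.

def decode_linear11(word):
    """word: 16-bit value from a PMBus read-word transaction.
    Upper 5 bits: two's-complement exponent N; lower 11 bits:
    two's-complement mantissa Y. Real-world value = Y * 2**N."""
    exponent = (word >> 11) & 0x1F
    if exponent > 0x0F:              # sign-extend the 5-bit exponent
        exponent -= 0x20
    mantissa = word & 0x7FF
    if mantissa > 0x3FF:             # sign-extend the 11-bit mantissa
        mantissa -= 0x800
    return mantissa * (2 ** exponent)

# Example: exponent -2, mantissa 50 -> 50 * 2**-2 = 12.5 (e.g. amps)
sample = (0b11110 << 11) | 50
print(decode_linear11(sample))   # 12.5
```

The same decode serves current, temperature and power readings, which is what lets a supervisor poll every rail over one two-wire bus.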

What’s next?

In the very near future, multiphase controllers will offer fully-digital control loops and integrate sequencing, telemetry and advanced fault handling. They will also offer multiple processor interface options and support for smart power stages with integrated drivers and synchronous FETs.

Power supply designers will be able to use a software GUI – such as Intersil’s PowerNavigator – to quickly configure, validate and monitor all power conversion and operating parameters for their power supply, and to change any parameter, telemetry setting or power rail sequence with only a few mouse clicks. As digital multiphase controllers come to market featuring integrated sequencing and other new features, systems employing multiple power supplies will be able to bring up their voltage rails in the right sequence. System power can be provisioned so that the voltage converters power the loads at the appropriate level, with the required efficiency. These are the compelling advantages of implementing digital control in the next generation of data centre ‘cloud’ systems.
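A rail bring-up sequence of the kind described can be sketched as follows. The rail names, voltages and settle delays are invented for illustration; a real digital controller performs this autonomously in firmware and hardware.

```python
# Hypothetical power-up sequencing of multiple voltage rails.
# Rail names, voltages and settle delays are invented for illustration.
import time

SEQUENCE = [
    ("VDD_CORE", 0.9, 0.005),   # (rail, target volts, settle delay s)
    ("VDD_MEM",  1.2, 0.005),
    ("VDD_IO",   3.3, 0.002),
]

def power_up(enable_rail):
    """Bring rails up in order, letting each settle before enabling
    the next; `enable_rail` abstracts the actual hardware access."""
    for name, volts, delay in SEQUENCE:
        enable_rail(name, volts)
        time.sleep(delay)

order = []
power_up(lambda name, volts: order.append(name))
print(order)   # ['VDD_CORE', 'VDD_MEM', 'VDD_IO']
```

Encoding the sequence as data rather than hand-tuned delays in scattered code is the essence of what integrated sequencing moves into the controller itself.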

Green lining

There is a green lining to this ‘cloud’. Even though it consumes a significant amount of energy, the efficient consolidation of computing tasks in the cloud – which employs some of the most powerful computation systems ever built – still holds the promise of lowering the overall amount of energy that would otherwise be required to perform those tasks on discrete systems. The data centre’s evolving power delivery requirements offer semiconductor manufacturers significant opportunities to innovate.


Intersil is a leading provider of innovative power management and precision analogue solutions. The company's products form the building blocks of increasingly intelligent, mobile and power-hungry electronics, enabling advances in power management to improve efficiency and extend battery life. With a deep portfolio of intellectual property and a rich history of design and process innovation, Intersil is the trusted partner to leading companies in some of the world's largest markets, including industrial and infrastructure, mobile computing, automotive and aerospace.