
Process aims to take the thermal stresses off the electronics

'Big Data' is big and getting bigger. As the world moves from analogue to digital, all of that data passes through, and is stored in, data centres. On top of the data stored, there is also the move to do more computation in the cloud. The more data going in and around, the bigger the data centres get, the more power they use and the more heat is generated – although a data centre devoted to storage does less processing, and so generates less heat, than one whose main function is cloud computing.

A common measurement used in the data centre sector is PUE – Power Usage Effectiveness – the ratio of total facility power to the power delivered to the IT equipment. In a perfect world this figure would be 1, meaning that all the power consumed by the data centre is being used for IT purposes. More typically, PUE figures are 1.2 or 1.3. At a PUE of 1.3, a data centre whose IT equipment requires 100kW actually draws 130kW; the extra 30kW is largely used for cooling. Thermal management is therefore a problem in terms of the safe running of the electronics, environmental performance and the cost of running the centre.
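The PUE arithmetic above can be sketched in a few lines of Python. The function names here are illustrative, not from any standard library:

```python
# Minimal sketch of the PUE arithmetic: PUE is total facility power
# divided by IT equipment power, so facility draw = IT load * PUE.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total power a data centre draws for a given IT load and PUE."""
    return it_load_kw * pue

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT power (largely cooling): facility draw minus the IT load."""
    return facility_power_kw(it_load_kw, pue) - it_load_kw

print(facility_power_kw(100, 1.3))   # 130kW total for a 100kW IT load
print(overhead_kw(100, 1.3))         # 30kW of overhead, mostly cooling
print(overhead_kw(100, 1.05))        # only 5kW at the demonstrated PUE of 1.05
```

The comparison in the last two lines is the whole business case: moving from a PUE of 1.3 to 1.05 cuts the non-IT overhead of a 100kW facility from 30kW to 5kW.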

Traditional cooling is done by blowing air around, which may – just – satisfy the first of these three problems, but does nothing to alleviate the second and third. Although air cooling – either forced or natural convection – accounts for perhaps 80% of data centre cooling activity, another method that has gained some traction is running cold fluid through pipes or plates in contact with the heat generating parts. This requires additional cooling equipment to bring the fluid temperature down, plus 'fixturing' to make sure the pipes touch all the hot spots.

Cool supercomputer
However, a new technique currently being demonstrated claims to bring the all-important PUE figure down to about 1.05 – a massive step forward in terms of energy efficiency.

The project, being conducted in the US, is a proof of concept demonstration of a supercomputer being cooled by a process based on 3M's new two-phase immersion cooling technology.

Abel Ebongue, technical service engineer at 3M, said: "It is a demonstrator to show how efficient two-phase immersion cooling can be when we consider very high density design." And this design is very high density. The SGI ICE X, the fifth generation of what is claimed to be the world's fastest distributed memory supercomputer, is based on Intel's Xeon E5-2600 processors.

The SGI ICE X system, it is claimed, can scale seamlessly from tens of teraflops to tens of petaflops, and across technology generations, while maintaining uninterrupted production workflow.

"Through this collaboration with Intel and 3M, we are able to demonstrate a proof-of-concept showcasing an innovative capability to reduce energy use in data centres, whilst optimising performance," said Jorge Titinger, president and CEO of SGI. "Built entirely on industry standard hardware and software components, the SGI ICE X solution enables significant decreases in energy requirements for customers, lowering total cost of ownership and impact on the environment. We are delighted to work with Intel and 3M on this demonstration to illustrate the potential to further reduce energy in data centres, something imperative as we move to a more data intensive world."

Two-phase immersion
Ebongue explained how the system works. "Two-phase immersion cooling is a method of retrieving heat through direct contact with the fluid, and the mechanism for retrieval is phase change."

Devices are dropped down into the liquid. "Devices that are generating heat make the liquid boil," said Ebongue. "The boiling extracts the heat from the device, turning the liquid into the vapour phase. It is then condensed and the liquid goes back again and again in a loop."

The fluid used is 3M's NOVEC 649, a fluoroketone with dielectric and thermodynamic properties suited to this application. One important property is its boiling point of 49°C. This matters because the working temperature of the immersed devices is dictated by the fluid's boiling temperature, plus or minus a few degrees.
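A back-of-envelope calculation shows how phase change carries the heat away. The latent-heat figure used here (~88kJ/kg for NOVEC 649 at its boiling point) is an approximate datasheet value, included purely for illustration:

```python
# Back-of-envelope sketch of phase-change heat transport. In steady
# state, heat load = boil-off mass rate * latent heat of vaporisation.

LATENT_HEAT_J_PER_KG = 88_000  # assumed value for NOVEC 649, ~88kJ/kg

def boil_off_rate_kg_per_s(heat_load_w: float,
                           latent_heat: float = LATENT_HEAT_J_PER_KG) -> float:
    """Mass of fluid vaporised per second to carry away a given heat load.

    The condenser returns the same mass to the tank, so this is the
    circulation rate of the loop, not a net loss of fluid.
    """
    return heat_load_w / latent_heat

# A 10kW rack immersed in the fluid boils off roughly 0.11kg of
# fluid per second, all of which is condensed and returned.
print(boil_off_rate_kg_per_s(10_000))
```

Because boiling pins the fluid at its saturation temperature, the devices sit a few degrees above 49°C regardless of load – which is the temperature uniformity Ebongue describes below.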

"In this concept, you don't look at a particular area to cool," said Ebongue. "All of the computer is immersed. There is no need to look at hot spots, the whole board will be at the same temperature."

The immersion tank is closed, but not sealed. This means it can be opened for hot-swapping cards or maintenance. The vapour is drawn off and condensed in a heat exchanger, recovering both the liquid and its heat. The harvested heat can be used for other process or facility purposes, while the liquid is returned to the tank. "There is bound to be some loss," Ebongue observed. "The idea of the SGI experiment is to take measurements over the next weeks and months to see what losses there are. But the research we have done at the lab scale shows that fluid losses are minimal – less than 1%." He also pointed out that the repeated thermal cycling and phase changing has no effect on the chemical or physical properties of NOVEC 649.

Movement within the fluid occurs naturally when devices are hot enough to boil it – natural convection that provides uniformity of temperature in the fluid, so there is no need for a pump in the tank. The only additional energy input required is for a pump in the condenser to keep the cooling water on the move.

"As the backbone of the data economy, modern data centres must increase the raw performance they deliver, but also do so efficiently by containing power consumption and operating costs," said Charles Wuishpard, general manager of Intel's Workstation and High Performance Computing group. "Intel is continually innovating and improving microprocessor technologies to meet today's data centre demands and is working with companies like 3M and SGI to explore advanced cooling technologies that improve energy efficiency in data centres while also containing operating costs."

On top of this project, the three companies – 3M, SGI and Intel – are working with the US Naval Research Laboratory, Lawrence Berkeley National Laboratory and Schneider Electric subsidiary APC to deploy and evaluate an identical system, with the ambition of demonstrating the viability of the technology at any scale.

It is believed that this technique will use only a tenth of the space of conventional air cooling, while eliminating costly air cooling infrastructure and the equipment associated with conventional liquid cooling. And, although it is still at the demonstration stage, Ebongue summed up the proposition: "I can't say yet about the comparative capital costs of setting the system up, but anything with a PUE close to 1 is going to be interesting from a business point of view."

Author
Tim Fryer


This material is protected by MA Business copyright See Terms and Conditions. One-off usage is permitted but bulk copying is not. For multiple copies contact the sales team.
