
Analysing full integrated systems to reduce end-use energy demands

Dr Jon Summers, Scientific Leader in Data Centres, Research Institutes of Sweden, SICS North, discusses why gains in reducing the end-use energy demand of data centres are best addressed by analysing the full integrated system. Content supplied by the DCA.

In the world of data centres, the term 'facility' is commonly used to describe the shell that provides the space, power, cooling, physical security and protection needed to house information technology.

The data centre sector is made up of several different industries that deliberately intersect, and that point of intersection could loosely be defined as the data centre industry.

One very important argument is that a data centre exists to house IT, yet the facility and IT domains rarely interact unless the heat removal infrastructure invades the IT space. That is what happens with the so-called 'liquid cooling' of IT; normally the facility-IT divide is cushioned by air.

At RISE SICS North we are on a crusade to approach data centres as integrated systems, so our experiments are geared to include the full infrastructure, treating the facility and the IT it houses as one system.

This holistic approach enables the researchers to measure and monitor the full digital stack, from the ground to the cloud and from the chip to the chiller. To do so, we have built a management system that makes use of several open-source tools and generates more than 9GB of data per day from more than 30,000 measuring points within our operational ICE data centre.

Some of these measuring points are provided by wired temperature sensor strips, designed and deployed in-house, with magnetic rails that allow them to be mounted easily on the front and back of racks.

Recently, we have come up with a way to take control of the fans in Open Compute servers, in preparation for a new data centre build project where we will try to match the airflow the servers require with what the direct air handling units can provide.

Before joining the research group in Sweden, I was a full-time academic in the School of Mechanical Engineering at the University of Leeds.

At Leeds, our research focused on the thermal and energy management of microelectronic systems, and the experiments made use of real IT, allowing us to account for the energy required to provide digital services alongside the energy needed to keep systems within their thermal envelope. The research involved both air and liquid cooling and, for the latter, we were able to work with rear-door heat exchangers, on-chip cooling and immersion systems.

In determining the Power Usage Effectiveness (PUE) of air versus liquid systems, it is always difficult to show that liquids are more 'effective' than air at removing heat.

However, a metric that assesses the overhead of heat removal should include all the components whose function is to remove heat. So, for centrally pumped coolants in the case of liquid cooling, the overhead of the pump power is correctly assigned to the numerator of the PUE, but this is not the case for the fans inside IT equipment, whose power is conventionally counted as part of the IT load in the denominator.
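To make that asymmetry concrete, here is a minimal Python sketch of where the same heat-removal power lands in the PUE ratio; the 10kW IT load and 1kW removal power are assumed illustrative values, not figures from our facility:

```python
# PUE = total facility power / IT equipment power.
# The same 1kW of heat-removal power moves the ratio differently
# depending on where the accounting convention places it.

IT_COMPUTE_W = 10_000.0  # assumed IT load, excluding internal fans
REMOVAL_W = 1_000.0      # assumed heat-removal power (pump or IT fans)

# Centrally pumped coolant: counted as facility overhead only.
pue_pump = (IT_COMPUTE_W + REMOVAL_W) / IT_COMPUTE_W

# Fans inside the IT equipment: counted as IT load, so they appear
# in both numerator and denominator and the overhead disappears.
pue_fans = (IT_COMPUTE_W + REMOVAL_W) / (IT_COMPUTE_W + REMOVAL_W)

print(f"Pump as overhead: PUE = {pue_pump:.2f}")  # 1.10
print(f"Fans as IT load:  PUE = {pue_fans:.2f}")  # 1.00
```

The fan power has not gone away; it has simply become invisible to the metric.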

So what percentage of the critical load do the fans consume? Here we can do some simple back-of-the-envelope calculations, but first we need to understand how air movers work. The facility fans are usually large and their electrical power, Pe, can be measured using a power meter.

This electrical power is converted into a volumetric flowrate that overcomes the pressure drop, ∆P, that is caused by ducts, obstacles, filters, etc. between this facility fan and the entrance to the IT.

If you look at a variety of literature on this subject, such as fan curves and the fan affinity laws, you may arrive at a figure of 1kW of electrical power per cubic metre per second of flow rate, VF. With an efficiency, η, of 50%, the flow rate and pressure then follow the simple relationship ηPe = ∆P VF.

Thus, 1kW of power consumption will overcome 500 pascals of pressure drop at a flow rate of one cubic metre per second. The IT fans are then employed to take over this volumetric flowrate of air, overcoming the pressure drop across the IT equipment and exhausting the hot air at its rear.
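As a quick sanity check on that relationship, a few lines of Python (the function name is ours; the 1kW, 50% and one cubic metre per second are the illustrative values above):

```python
def pressure_drop(electrical_power_w: float, efficiency: float,
                  flow_rate_m3s: float) -> float:
    """Pressure drop (Pa) a fan overcomes, from eta * Pe = dP * VF."""
    return efficiency * electrical_power_w / flow_rate_m3s

# Illustrative facility fan: 1kW electrical, 50% efficient, 1 m^3/s.
print(pressure_drop(1000.0, 0.5, 1.0))  # 500.0 Pa
```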

Again, there is literature on the pressure drop across a server, and we calculated this at Leeds using a generic server wind tunnel. For a 1U server, for example, the pressure drop is around 350 pascals, though this does depend on the components inside the server.

Fans that sit inside a 1U server are typically at best 25% efficient, and it is well known that smaller fans are less efficient than larger ones. We can now use the simple equation ηPe = ∆P VF again to determine the electrical power that these small, less efficient fans require to overcome the server pressure drop at one cubic metre per second, assuming no air has wandered off somewhere else in the data centre.

This yields an accumulated fan power of 1.4kW. But just how much thermal power can these fans remove? For this answer, we need to employ the steady-state thermodynamic relationship PT = ρcpVF∆T, making use of the density of air, ρ (=1.22kg/m³), its specific heat capacity at constant pressure, cp (=1006J/kg/K), and the temperature increase (delta-T), ∆T, across the servers. Now we must make a guess at the delta-T.

We can try a range, say 5, 10 and 15°C, which, with the same flow rate of one cubic metre per second, gives a thermal power injected into the airstream in passing through the racks of 6,136W, 12,273W and 18,410W for the three respective delta-T values. It is then easy to see that, under completely ideal conditions, the small server fans respectively consume 18.6%, 10.2% and 7.1% of the total server power, assuming no losses in the airflow.
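The whole back-of-the-envelope calculation fits in a short Python sketch; the function and variable names are ours, and the pressure drop, efficiency and delta-T values are the illustrative figures above rather than measurements:

```python
RHO = 1.22   # density of air, kg/m^3
CP = 1006.0  # specific heat capacity of air, J/kg/K

def fan_electrical_power(dp_pa: float, flow_m3s: float, eta: float) -> float:
    """Electrical fan power (W) to overcome dp_pa, from eta * Pe = dP * VF."""
    return dp_pa * flow_m3s / eta

def thermal_power(flow_m3s: float, delta_t: float) -> float:
    """Heat carried away by the airstream (W): PT = rho * cp * VF * dT."""
    return RHO * CP * flow_m3s * delta_t

flow = 1.0  # m^3/s through the servers
fans_w = fan_electrical_power(350.0, flow, 0.25)  # 1U server fans: 1,400W

for dt in (5, 10, 15):
    pt = thermal_power(flow, dt)    # ~6,137W / 12,273W / 18,410W
    share = fans_w / (pt + fans_w)  # fan share of total server power
    print(f"dT={dt:>2}°C: thermal={pt:,.0f}W, fan share={share:.1%}")
```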

Given that these simple equations are based on a number of assumptions that yield conservative approximations, it is not unreasonable to say that IT fan power can consume more than 7% of the power of a typical 1U server.

It is now very tempting to add all of these figures together to show how a partial PUE is affected by the rack delta-T. Gains in reducing the end-use energy demand of data centres are clearly best addressed by analysing the full integrated system.
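Giving in to that temptation for a moment, here is a hypothetical sketch of such a partial PUE, assuming the 1kW facility fan and 1.4kW of server fans from the calculations above, with the server fans counted as IT load in the conventional way:

```python
RHO, CP, FLOW = 1.22, 1006.0, 1.0  # air properties and flow rate (m^3/s)
FACILITY_FAN_W = 1000.0            # facility fan, counted as overhead
SERVER_FANS_W = 1400.0             # 1U server fans, counted as IT load

for dt in (5, 10, 15):
    it_load = RHO * CP * FLOW * dt + SERVER_FANS_W
    ppue = (it_load + FACILITY_FAN_W) / it_load
    print(f"dT={dt:>2}°C: partial PUE = {ppue:.2f}")  # 1.13 / 1.07 / 1.05
```

The lower the rack delta-T, the more fan power is spent per watt of heat removed, and much of that power hides inside the IT load where the PUE cannot see it.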
