DCA spotlight: R&D projects are worth considering

This month, Steve Hone, CEO, DCA, explains why the trade association supports R&D and provides details of DCA Partner EcoCooling’s involvement in a project to build and manage the most efficient data centre in the world.

Investing time in research can deliver real benefits to business. Many successful companies, such as those producing consumer goods or mass-market items, invest heavily in research and development (R&D).

Industries including computer software, semiconductors, information and communication technology, robotics and energy all have high R&D expenditure – R&D is critical to product innovation and the improvement of services, and can also help to secure major advantages over competitors.

Taking the time to examine product offerings and identify improvements can differentiate one organisation from another. In fact, R&D can contribute to raising a company’s market value and to boosting profitability.

From its inception, the DCA Trade Association has actively championed R&D initiatives and projects. This has enabled us to maintain strong ties with the academic world and to build enduring connections with major players in the data centre sector.

These organisations remain committed to maintaining the health and sustainability of the data centre sector.

Over the last eight years, through continued collaboration with academic, strategic and corporate partners, the DCA has successfully helped to secure R&D funding for EU Commission projects such as PEDCA, the award-winning EURECA project and the DEW COOL project, a collaboration between Europe and China.

The DCA has also supported dozens more EU Commission R&D projects, including OPERA, SLALOM and the ICT Footprint project, to name but a few.

Contact the DCA to find out more about research projects by emailing [email protected]

The DCA frequently facilitates networking and introductions that allow our partners to meet and discuss collaboration and research ideas and projects. For two years, DCA corporate partner EcoCooling has been involved in an EU Horizon 2020 project with RISE and Boden Business Agency to build the most efficient data centre in the world.

EcoCooling’s report below provides interesting reading.

Unlocking the holy grail of efficiency – Holistic management of DC infrastructure achieves PUE of 1.02 in H2020 project  

Since 2017, DCA member EcoCooling has been involved in a ground-breaking, EU Horizon 2020-funded pan-European research project to build and manage the most efficient data centre in the world.

With partners H1 Systems (project management), Fraunhofer IOSB (compute load simulation), RISE (Swedish Institute of Computer Science) and Boden Business Agency (regional development agency), a 500kW data centre has been constructed using the very latest energy-efficient technologies and employing a highly innovative holistic control system.

In this article we will provide an update on the exciting results being achieved by the Boden Type Data Centre 1 (BTDC-1) and what we can expect from the project in the future.

The project objective: To build and research the world’s most energy and cost-efficient data centre.

The BTDC is in Sweden, where there is an abundant supply of clean, renewable hydro-electricity and a cold climate ideal for free cooling. Made up of three separate research modules/pods of Open Compute/conventional IT, HPC and ASIC (Application Specific Integrated Circuit) equipment, the facility was designed to meet the EU’s target of a PUE of less than 1.1 across all of these technologies. With only half of the project complete, the facility has already demonstrated PUEs of below 1.02, which we believe is an incredible achievement.

The highly innovative modular building and cooling system was devised to be suitable for data centres of all sizes. By using these construction, cooling and operation techniques, smaller-scale operators will be able to match or better the cost and energy efficiencies of hyperscale data centres.

We all recognise that PUE has limitations as a metric; however, in this article and for dissemination purposes, we will continue to use PUE as a comparative measure, as it is still widely understood.
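For readers less familiar with the metric, PUE is simply total facility energy divided by the energy delivered to the IT equipment. A minimal sketch, using illustrative figures rather than measured project data:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean every kWh entering the facility reaches the IT
    equipment; anything above 1.0 is overhead (cooling, power distribution, etc.).
    """
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures only (not measured project data):
print(pue(1.8, 1.0))   # 1.8  - the commercial data centre norm cited in this article
print(pue(1.02, 1.0))  # 1.02 - the sort of figure BTDC-1 has demonstrated
```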

Exciting first results – Utilising the most efficient cooling system possible

At BTDC-1, one of the main economic features is the use of EcoCooling’s direct ventilation systems with optional adiabatic (evaporative) cooling, which produce the cooling effect without requiring an expensive conventional refrigeration plant.

This brings two facets to the solution at BTDC-1. Firstly, on very hot days, or on very cold, dry days, the ‘single box’ approach of the EcoCoolers can switch to adiabatic mode and provide as much cooling or humidification as necessary to keep the IT equipment’s environmental conditions within the ASHRAE ‘ideal’ envelope, 100% of the time.

With the cooling and humidification approach I’ve just outlined, we were able to produce very exciting results.

Instead of the commercial data centre norm of PUE 1.8, or 80% extra energy used for cooling, we have been achieving a PUE of less than 1.05 – lower than the published values of some data centre operators using ‘single-purpose’ servers – but we’ve done it with general-purpose OCP servers. We’ve also achieved the same PUE using high-density ASIC servers.

This is an amazing development in the cost and carbon footprint reduction of data centres. Let’s quickly look at the economics applied to a typical 100kW medium-sized data centre: the cooling energy cost drops from £80,000 to a mere £5,000, a saving of £75,000 per year for an average 100kW commercial data centre.
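As a rough sketch of where those figures come from, treat everything above PUE 1.0 as cooling overhead and assume an electricity price of roughly £0.11 per kWh – both simplifications of ours, chosen to match the article’s round numbers:

```python
# Rough, illustrative arithmetic only - the electricity price (~0.114 GBP/kWh) is an
# assumption chosen to reproduce the article's round figures, not project data.
IT_LOAD_KW = 100          # the typical medium-sized data centre used in the article
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.114     # assumed electricity price in GBP

def annual_cooling_cost(pue: float) -> float:
    # Simplification: treat everything above PUE 1.0 as cooling overhead.
    overhead_kw = IT_LOAD_KW * (pue - 1.0)
    return overhead_kw * HOURS_PER_YEAR * PRICE_PER_KWH

print(round(annual_cooling_cost(1.8)))   # ~80,000 GBP a year at the PUE 1.8 norm
print(round(annual_cooling_cost(1.05)))  # ~5,000 GBP a year at PUE 1.05
```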

Smashing 1.05 PUE – Direct linking of server temperature to fan speed

What we did next has had truly phenomenal results using simple process controls. What has been achieved here could be simply replicated in conventional servers, but this ultra-efficient operation will only be realised if mainstream server manufacturers embrace these principles. I believe this presents a real ‘wake-up’ call to conventional server manufacturers – if they are ever to get serious about total cost of ownership and global data centre energy usage.

You may know that within every server, there are multiple temperature sensors which feed into algorithms to control the internal fans. Mainstream servers don’t yet make this temperature information available outside the server.

However, one of the three ‘pods’ within BTDC-1 is kitted out with about 140kW of Open Compute servers. One of the strengths of the partners in this project is that averaged server temperature measurements have been made accessible to the cooling system. At EcoCooling, we have taken all of that temperature information into the cooling system’s process controllers (without needing any extra hardware).

Normally, the processing and cooling systems are separate, with inefficient time-lags and wasted energy. We have made them close-coupled and able to react to load changes in milliseconds rather than minutes.
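The project’s actual controller isn’t reproduced here, but the close-coupling idea can be sketched as a simple proportional loop in which the averaged server temperature directly sets the cooling fan demand. All setpoints, gains and names below are our own illustrative assumptions, not the BTDC-1 controller:

```python
# Minimal sketch of a close-coupled cooling loop: averaged server temperatures drive
# the cooling fan speed directly. Setpoints, gains and names are illustrative
# assumptions only.
SETPOINT_C = 25.0        # assumed target average server inlet temperature
GAIN = 0.08              # assumed proportional gain (fan fraction per degree C)
MIN_FAN, MAX_FAN = 0.2, 1.0

def fan_speed_fraction(avg_server_temp_c: float) -> float:
    """Map the averaged server temperature straight to a cooling fan demand."""
    demand = MIN_FAN + GAIN * (avg_server_temp_c - SETPOINT_C)
    return max(MIN_FAN, min(MAX_FAN, demand))

# Example: as the pod warms under load, fan demand rises immediately.
for temp in (24.0, 26.0, 30.0, 34.0):
    print(temp, round(fan_speed_fraction(temp), 2))
```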

As a result, we now have BTDC-1 ‘Pod 1’ operating with a PUE of not 1.8, not 1.05, but 1.03.

The BTDC-1 project has demonstrated a robust repeatable strategy for reducing the energy cost of cooling a 100kW data centre from £80,000 to a tiny £3,000.

This represents a saving of £77,000 a year for a typical 100kW data centre. Now consider the cost and environmental implications of this for the hundreds of new data centres anticipated to be rolled out to support 5G and ‘edge’ deployments.

Planning for the future – Automatically adjusting to changing loads

An integrated and dynamic approach to DC management is going to be essential as data centre energy-use patterns change.

What do I mean?  Well, most current-generation data centres (and indeed the servers within them) present a fairly constant energy load. That is because the typical server’s energy use only reduces from 100% when it is flat-out to 75% when it’s doing nothing.

At BTDC-1, we are also designing for two upcoming changes which are going to massively alter the way data centres need to operate.

Firstly, the next generations of servers will use far less energy when not busy. So instead of 75% quiescent energy, we expect to see this fall to 25%. This means the cooling system must continue to deliver 1.003 pPUE at very low loads. (It does.)
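One reason a fresh-air system can hold such a low partial PUE at light load is the fan affinity law: fan power falls roughly with the cube of fan speed, so halving the airflow cuts fan energy to around an eighth. A hedged illustration, using assumed ratings rather than BTDC-1 figures:

```python
# Illustrative only: fan power scales roughly with the cube of speed (fan affinity
# law), so cooling overhead can stay tiny as the IT load drops. Ratings are assumed.
RATED_FAN_KW = 3.0       # assumed fan power at full speed for a 100 kW pod
RATED_IT_KW = 100.0

def partial_pue(it_load_fraction: float) -> float:
    # Assume airflow (fan speed) tracks the IT load fraction one-to-one.
    fan_kw = RATED_FAN_KW * it_load_fraction ** 3
    it_kw = RATED_IT_KW * it_load_fraction
    return (it_kw + fan_kw) / it_kw

for load in (1.0, 0.5, 0.25):
    print(load, round(partial_pue(load), 4))   # overhead shrinks as load falls
```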

Also, BTDC-1 Pod 1 isn’t just sitting idly drawing power – our colleagues from the project are using it to emulate a complete smart city (including the massive processing load of driverless cars). The processing load varies wildly, with massive loads during the commuter ‘rush hours’ on weekday mornings and afternoons, and then (comparatively) almost no activity in the middle of the night. So, we can expect many DCs (and particularly the new breed of ‘dark’ edge DCs) to have wildly varying power and cooling load requirements.

Call to vendors

At BTDC-1, we have three research pods. Pod 2 is empty – waiting for one or more of the mainstream server manufacturers to step up to the ‘global data centre efficiency’ plate and get involved.

As a sneak peek of what’s to come in future project news: using the same principles outlined in this article, Pod 3 (ASIC) is now achieving a PUE of 1.004. We are absolutely certain that if server manufacturers work with the partners on the BTDC-1 research project, we can help them (and the entire data centre world) to slash average cooling PUEs from 1.5 to 1.004.

The opportunity for EcoCooling to work with RISE (the Swedish Institute of Computer Science) and the German research institute Fraunhofer has allowed us to provide independent analysis and validation of what can be achieved using direct fresh-air cooling.

The initial results are incredibly promising and, considering we are only halfway through the project, we are excited to see what additional efficiencies can be achieved.
