As data centres take on more expansive and unique challenges, their power distribution equipment must meet those performance needs. Server cabinets and racks – even individual server units – need to be designed for maximum adaptability to the ever-changing power consumption requirements of their demanding environments. Marc Cram, Director of New Market Development at Legrand, tells us more…
Whether dedicated to supercomputing or Artificial Intelligence, data centres are by their very nature unique in form factor and physical architecture. Sometimes they’ll fit into an existing building on campus, with a retrofit of new infrastructure to support the additional demands placed on the power and cooling systems of the facility. Other times they’re installed in an entirely new facility designed expressly for housing the machinery. In both instances, administrators must find custom solutions for delivering power, cooling, networking and so forth.
On the other hand, Edge Computing is designed to put applications and data closer to devices – and their users. But it brings a different set of challenges than the massive data centres used in supercomputing and AI applications. Space is a significant one in many cases: smaller enclosures mean even less room for power distribution equipment. And because Edge Computing takes place at remote sites, operators must be able to validate connectivity and remediate issues from afar.
Data centres require power – and lots of it.
It’s as simple as that. Data centre design has always meant solving two problems: feeding the facility’s power needs and distributing that electrical power once it’s inside. Some of the world’s largest data centres can each contain many tens of thousands of IT devices and require more than 100 megawatts (MW) of power capacity – enough to power around 80,000 US households (U.S. DOE 2020).
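A quick back-of-the-envelope check shows where that household figure comes from; the ~1.25 kW average household draw used here is an illustrative assumption (roughly 11,000 kWh per year), not an exact DOE number:

```python
# Back-of-the-envelope check on the 100 MW figure.
facility_mw = 100
avg_household_kw = 1.25  # assumed average continuous draw per US household

households = facility_mw * 1_000 / avg_household_kw
print(f"{facility_mw} MW supports about {households:,.0f} average households")
# -> 100 MW supports about 80,000 average households
```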
With this immense power consumption demand comes the challenge of managing power distribution on a more granular level. Off-the-shelf and semi-custom solutions for remote access, power and white space infrastructure satisfy the needs of most enterprise and SMB data centre applications. More expansive and complex data centres often use similar solutions. However, the need for ongoing improvements in efficiency and sustainability leads many HPC installations, AI applications, hyperscale data centres and telecom operators to seek novel custom solutions to layout, power density, cooling and connectivity.
It’s a safe assumption that each software workload has its own power consumption requirements. If form follows function, then the application drives architectural choices for hardware and its environment. Hyperscalers provide a roadmap for adding more space and more racks for more servers when we think we’ve reached, or are about to hit, our power consumption caps. But supercomputing wants everything physically close together to maximise throughput, AI wants to run on specialised processors, and Edge Computing is inherently distributed.
MareNostrum is the central supercomputer at the Barcelona Supercomputing Centre. Its general-purpose block has 48 racks holding 3,456 nodes. Each node has two Intel Xeon Platinum chips, each with 24 cores, amounting to 165,888 cores and a main memory of 390 terabytes. All of this sits in the Chapel Torre Girona, built in the 1920s. As one can imagine, placing the data centre without disturbing the chapel’s structure is the ultimate challenge of making the most of existing conditions.
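The headline numbers follow directly from the node counts, and a quick check also shows why per-rack density becomes a power distribution concern:

```python
# Reproducing the article's MareNostrum figures to show the rack density
# that the power distribution equipment must serve.
racks = 48
nodes = 3_456
sockets_per_node = 2    # two Xeon Platinum chips per node
cores_per_socket = 24

print(nodes * sockets_per_node * cores_per_socket)  # 165888 cores in total
print(nodes // racks)                               # 72 nodes per rack
```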
The space dedicated to processing and CRACs (computer room air conditioners) leaves little room for distributing power to the units. A situation like this poses challenges for deploying power distribution units (PDUs), necessitating a customised solution:
• There may be little or no room at the back of the rack for a zero-U PDU, so it may have to mount on the sides of the racks
• Little or no airflow inside the rack means the PDU may have to rely on convection outside the rack for cooling
• Taller racks with more servers create high outlet-density requirements
• The need for high power density in the racks may call for PDUs with monitoring capabilities (a monitoring sketch follows this list)
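Many intelligent rack PDUs expose their readings over SNMP. As a minimal sketch of what remote monitoring can look like, the following polls a single active-power value with the pysnmp library; the host address and OID are placeholders, since the real OID comes from the PDU vendor’s MIB:

```python
# Minimal sketch: polling a rack PDU's active-power reading over SNMP.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

PDU_HOST = "192.0.2.10"                  # hypothetical PDU address
POWER_OID = "1.3.6.1.4.1.99999.1.1.1"    # placeholder OID for active power (W)

error_indication, error_status, _, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),   # SNMP v2c, default community
        UdpTransportTarget((PDU_HOST, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(POWER_OID)),
    )
)

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(f"{oid} = {value} W")
```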
AI poses possible predicaments for PDUs
Artificial Intelligence regularly produces incredible accomplishments: computers learning the subtleties of language, becoming your emotional health assistant, beating humans at Jeopardy and driving your car. But all these accomplishments require astonishing amounts of computing power – and electricity – to devise and train algorithms. A unique aspect of AI applications is their high internal bandwidth between boxes/nodes and optical connections, which can be power-intensive.
When designing a power distribution plan for an AI facility, you face many of the same challenges as with a supercomputing facility:
• You may need a PDU with high-density outlet technology that can help with capacity planning and maximising electrical power utilisation (see the capacity-planning sketch after this list)
• AI facilities often require custom racks, which demand ingenuity in the placement of PDUs
• High-density, higher-power installations test the limits of standard PDUs
• Power density may go beyond what a C19 or other standard outlet can deliver, pointing to custom outlet solutions
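To make the capacity-planning point concrete, here is an illustrative calculation; every figure is an assumption chosen for the example rather than a vendor specification:

```python
# Illustrative capacity-planning arithmetic for a high-density AI rack.
rack_budget_kw = 34.6     # e.g. a three-phase 400 V / 50 A feed (assumed)
server_draw_kw = 4.0      # a GPU server under training load (assumed)
outlets_per_server = 2    # dual power supplies -> two PDU outlets each

servers = int(rack_budget_kw // server_draw_kw)
outlets = servers * outlets_per_server
print(f"{servers} servers per rack, {outlets} PDU outlets needed")
print(f"Headroom: {rack_budget_kw - servers * server_draw_kw:.1f} kW")
# -> 8 servers per rack, 16 PDU outlets needed; 2.6 kW headroom
```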
Gaining an Edge with PDUs
Edge Computing occurs at or near the user’s physical location or the source of the data. By placing computing services closer to these locations, users benefit from faster, more reliable services. The explosive growth of IoT devices and new applications that require real-time computing power continues to drive Edge Computing systems. Edge Computing can occur in harsh environments like manufacturing facilities, warehouses or outdoor locations (for example, oil rigs and mobile phone towers). These demanding environments may require the Edge data centre to operate across wide temperature ranges, which makes support for environmental sensors essential.
Their placement at the data source may also demand remote management capabilities and tightly controlled remote access. Edge Computing therefore presents some distinctive challenges:
• The need for environmental monitoring sensors as a safeguard against temperature and power extremes outside the equipment’s operating capabilities
• A clear case for monitoring power consumption remotely
• PDUs with onboard communications capable of scheduling outlet power on and off
• PDUs capable of shedding load to maximise battery uptime when thresholds are exceeded (see the load-shedding sketch after this list)
• Operating environments that require the PDU to function beyond the usual 0-60 degrees Celsius
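Load shedding on a switched PDU can be sketched in a few lines; this is purely illustrative, with hypothetical helper functions standing in for a vendor’s API or SNMP set commands:

```python
# Threshold-based load shedding on a switched PDU, sketched in Python.
# read_power() and switch_outlet() are hypothetical helpers standing in
# for a vendor API or SNMP set commands.
SHED_THRESHOLD_W = 3_000    # assumed power ceiling while running on battery
SHED_ORDER = [8, 7, 6]      # least critical outlets first (assumed layout)

def shed_load(read_power, switch_outlet, on_battery: bool) -> None:
    """Turn off low-priority outlets until draw falls below the threshold."""
    if not on_battery:
        return
    for outlet in SHED_ORDER:
        if read_power() <= SHED_THRESHOLD_W:
            break                           # back under threshold; stop shedding
        switch_outlet(outlet, on=False)     # drop the least critical load first
```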
When custom power components are the only real solution
As stated above, off-the-shelf and semi-custom solutions for remote access, power and white space infrastructure satisfy the needs of most enterprise and SMB data centre applications. However, the self-imposed drive for ongoing improvements in efficiency and sustainability worldwide has led HPC installations, AI applications, hyperscale data centres and telecom operators to seek novel custom solutions to layout, power density, cooling and connectivity. The push for renewable energy sources also influences the use of DC power versus conventional AC power.