Over the last few years, there has been a constant focus on carbon reduction around IT systems, both inside and outside the data centre. Most of that attention has been on power, cooling, servers and storage. According to Ian Wilkie, Commercial Director at Brand-Rex, the next phase is to look at the network and the supply chain.
Look at any large company and you will find it has publicly embraced the need for a “green” policy. For some, that is a real statement of intent; for others, it is driven by a marketing team that realises customers are beginning to make ethical choices.
One of the ways that many companies can make quick gains on their green credentials is in the supply chain. Not everything, of course, can be green. If you are commissioning data centres or new distribution centres, you have to use concrete, and according to industry figures every cubic metre of concrete poured can release around 410kg of CO2e.
It is not just concrete that is the problem. Few CIOs want to know the carbon footprint of each server, switch, hard drive, cabinet, printer or other part of the IT infrastructure. The same is true of the rest of the business: facility teams are not generally interested in the carbon footprint embodied in each vehicle they purchase. Instead, the enterprise focuses on what it can control, its own operational carbon footprint.
That, however, is changing. Vendors such as Brand-Rex are now talking openly about the carbon footprint of some of their products. This means that enterprise customers can begin to factor the carbon footprint of the products they purchase into their supply-chain decisions. It is a major step forward for any company that wants to understand its ethical and environmental responsibilities, and the first time any major IT vendor has done this.
Examples of green project investments include a hydro-dam project in Turkey, where the power used for purifying copper is generated, plus a project to improve transportation energy efficiency through the introduction of regenerative braking technology. These two projects, along with other carbon reduction measures – such as pulp inserts, reduced manufacturing plant conversion energy and un-bleached boxes – helped Brand-Rex lower its products’ carbon footprint by over 1,000 tonnes in a year.
Such investments and savings might not be applicable to every customer. What they do enable customers to know is that they are buying from a carbon-sensitive company. Such a move plays well in the corporate environmental-impact space.
So how does this translate into the enterprise or data centre network?
Inside the data centre and comms room
Can savings be made in the network? The answer is yes, plenty! They start with simple things such as virtualisation, better cable management and choice of cabling.
Before any choice of cable is made, good management and practice are essential. Basic requirements include better aisle containment, making sure blanking plates are fitted, that there are no air leaks around racks and that hot and cold air are not mixing. Don’t be surprised if, when you wander around your own data centre, you see some of these not being done properly. It takes effective management and process to make sure checks happen regularly.
A big problem for older data centres and comms rooms – and one that still exists in new builds – is the location and cooling of switches. Historically, switches have generally been located at the back of racks – but this is where the majority of the heat exists in today’s higher-powered racks. This means that they operate at very high temperatures and are the most likely item in a rack to fail due to overheating.
Hardware manufacturers recognise this risk and so the airflow through a switch is rarely from front to back. Instead it is often side to side or, in some cases, back to front. To cool switches most effectively, there is a need for data centre teams to get creative.
A good solution is to use ducts to channel cold air from the front of the rack and feed it to the input side of the switch. Another duct collects the hot exhaust air and vents it out the back of the cabinet. This increases reliability, prolongs switch life and helps to ensure that cold and hot aisles are properly separated without air short-circuits.
Ports mean power
Every active port on every switch consumes power. On older switches that is a lot of power, because the ports are always on and always transmitting at a level high enough to drive a full 100-metre link, even if the link is only 20m. Even more power is consumed day in, day out if the port supports Power over Ethernet (PoE or PoE+).
A switch technology refresh can often be justified simply by the power savings that can be achieved across the estate. Newer, more advanced switches contain smart power control. Kicked off by the IEEE 802.3az “Green Ethernet” standard, now usually referred to as EEE (Energy-Efficient Ethernet), these devices switch off the transmitters in the PHY (physical interface) when they are not needed, detect when far-end devices such as PCs are inactive and put the link into sleep mode. They even measure the length of the link when active, turning down their transmit power (and hence power draw) to match.
Most importantly, the operating systems of these switches can completely de-activate the PHYs of individual ports so that they consume no power at all. Take the trouble to do this and you will immediately save kilowatts of 24/7 power consumption. (And disconnect patch cords on unused ports: it avoids confusion, improves airflow, and the cords can be re-used rather than buying more.)
Even though switch ports individually use only a watt or two, multiply that by 10,000 or so around the estate, each drawing power 8,760 hours a year, and you will understand why, as far back as 2005, it was estimated that the combined pool of switch ports in the USA devoured 5.3 terawatt-hours (TWh) of energy a year. Imagine what that figure would be were it not for technology like EEE.
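A back-of-the-envelope model makes the scale of this clear. The figures below (2W for a legacy always-on port, 0.5W average for an EEE port, 10,000 ports) are illustrative assumptions for the sketch, not vendor measurements:

```python
# Rough estimate of always-on switch-port energy across an estate.
# Per-port wattages and the port count are illustrative assumptions.

HOURS_PER_YEAR = 8760  # 24 hours x 365 days

def estate_energy_kwh(ports: int, watts_per_port: float) -> float:
    """Annual energy consumed by `ports` always-powered ports, in kWh."""
    return ports * watts_per_port * HOURS_PER_YEAR / 1000

legacy = estate_energy_kwh(10_000, 2.0)  # older ports at full power all year
eee    = estate_energy_kwh(10_000, 0.5)  # EEE ports idling most of the time

print(f"Legacy estate: {legacy:,.0f} kWh/year")   # 175,200 kWh/year
print(f"EEE estate:    {eee:,.0f} kWh/year")      # 43,800 kWh/year
print(f"Saving:        {legacy - eee:,.0f} kWh/year")
```

Even with these modest assumptions, the difference across one estate runs to six figures of kilowatt-hours a year, which is why a refresh can pay for itself on power alone.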
Make space for air
10Gbit/s cabling is required by the data centre standards and is now being installed in an increasing proportion of large enterprise projects. But some standard UTP Category 6A cables are bulky, difficult to work with and seriously impede both in-rack and under-floor airflow. The consequence is that cooling fans have to work harder and draw a lot more power.
Shielded Cat6A cables are smaller in diameter, while some designed specifically for the data centre, such as Brand-Rex zone cable, are as thin as a Cat5e cable, freeing up the airflow and allowing fans to slow down and use less power.
Fibre or copper
It seems this debate will rage on for as long as we need to transport bits from A to B. The truth is there is no one-solution-fits-all answer.
If 10Gbit/s is the highest speed your network will ever need, then copper is probably the answer. You can lay in Cat6A cables well in advance of need, but fit only EEE Gigabit switches. You can save even more power by delaying the roll-out of more power-hungry 10Gb/s switches until they are absolutely needed, and then upgrading only enough switches to match the real need rather than carrying out a blanket upgrade.
If fibre is an option for your application, then consider the power consumption of single-mode versus multi-mode versus copper and factor this into your business case. Bear in mind that one pair of single-mode fibres will handle 1Gb/s, through 10 and 40, right up to 100Gb/s with no changes to the cabling. Copper cabling has to be added in 10Gb/s chunks to get the same capacity, while multi-mode fibre goes from needing two fibres for 10Gb/s to eight for 40Gb/s and 20 for 100Gb/s.
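The counts above can be summarised in a short sketch. The figures follow the text (single-mode: one pair at any speed; multi-mode: 2, 8 and 20 fibres for 10, 40 and 100Gb/s; copper: one cable per 10Gb/s); the helper function itself is just for illustration:

```python
# Cable/fibre counts needed to reach a target capacity, using the figures
# given in the text above. Single-mode always uses one pair (2 fibres).

MULTIMODE_FIBRES = {10: 2, 40: 8, 100: 20}  # fibres per link at each speed

def copper_cables_for(gbps_target: int) -> int:
    """Number of 10Gb/s copper cables for a given aggregate capacity."""
    return -(-gbps_target // 10)  # ceiling division

for speed in (10, 40, 100):
    print(f"{speed:>3}Gb/s: single-mode 2 fibres, "
          f"multi-mode {MULTIMODE_FIBRES[speed]} fibres, "
          f"copper {copper_cables_for(speed)} x 10Gb/s cables")
```

Laid out this way, the longevity argument for single-mode is obvious: the fibre count never changes as the speed climbs.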
The use of virtualisation is well understood for servers inside modern data centres. What is not done well, largely due to a technology lag, is network virtualisation. Replace 1GbE copper with 10GbE copper or fibre, or even 40GbE, and then virtualise the capacity.
This has several benefits.
- Less space requirement for the cabling.
- An option to increase network capacity per link should traffic demand it.
- Easier to create redundant links by running two fibres rather than 20 copper cables.
- Longevity: provided single-mode is used, the fibre supports higher speeds and will not need to be replaced as switches are upgraded.
There are some drawbacks as well.
- There are many different multi-mode and single-mode fibre standards, which means you need to be careful about which type of fibre you are using.
- Fibre is inherently more difficult to patch or replace on site, although this can be overcome with training.
- Fibre ports generally draw more power than copper ports, but this can be offset by consolidating multiple copper links onto a single fibre link.
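A quick comparison illustrates that last trade-off. The per-port wattages below are hypothetical assumptions chosen for the sketch, not measured figures, but they show how consolidation can offset the higher per-port draw of fibre:

```python
# Hypothetical per-port wattages: a fibre port draws more than one copper
# port, but a single virtualised fibre link replaces many copper links.

COPPER_1G_W = 1.0   # assumed watts per 1GbE copper port
FIBRE_10G_W = 2.5   # assumed watts per 10GbE fibre port

def link_power(ports: int, watts_per_port: float) -> float:
    """Total power drawn by a group of identical ports, in watts."""
    return ports * watts_per_port

before = link_power(20, COPPER_1G_W)  # 20 x 1GbE copper links
after  = link_power(2, FIBRE_10G_W)   # 2 x 10GbE fibre links, virtualised

print(f"20 x 1GbE copper: {before:.1f} W")  # 20.0 W
print(f" 2 x 10GbE fibre: {after:.1f} W")   # 5.0 W
```

Under these assumptions the consolidated fibre links draw a quarter of the power of the copper links they replace, before counting the airflow and space benefits listed above.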
The network is often overlooked as an opportunity to improve power efficiency and reduce carbon footprint. Yet network vendors such as Brand-Rex are able to deliver carbon savings as part of the supply chain and as part of the equipment in the data centre. With environmental concerns rising up the corporate governance chain, it’s time to take a hard look at your network and how it can deliver unexpected benefits.