Power consumption remains a hot topic for data centers in 2013. The exploding use of online services and mobile devices is increasing server and data center loads exponentially, and with this comes inevitably higher power demand. But with many of the easy fixes in data center power reduction, such as IT virtualization, centralization and modernization, already completed, how can enterprises keep control over their future power requirements?
energy and data centers: where are we?
A 2010 report from analyst firm Gartner found that energy costs represented 12% of all data center expenditure. More recent figures suggest that the falling cost of computing hardware is making power one of the largest costs of all, perhaps up to 25%.
A recent New York Times article claimed that worldwide data centers “use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants”. Part of the problem is that users tend not to delete the old material they store in the cloud, so the holiday photos from five years ago are still sitting on a server somewhere, consuming power.
Cutting power consumption in the data center offers three key advantages:
- it plays an important part in reducing the enterprise carbon footprint
- it ensures energy security for the future
- and, perhaps most importantly, it can significantly reduce overall costs
measuring energy efficiency
Leaving aside for one moment whether all of this power consumption is being put to worthwhile use, how much of it actually goes into powering the IT equipment itself? The IT industry has a yardstick for this, called the power usage effectiveness (PUE) rating. This measures how much of the power drawn by the data center goes to the IT equipment, compared with everything else: cooling, lighting and ancillary systems are all counted as power overhead.
While calculating the PUE can be complex, particularly in mixed-use facilities, it gives data center operators a reference figure for their energy efficiency. The closer the figure is to 1, the higher the proportion of energy used for computing. Google, for example, has published the PUE for all its data centers and has seen its average fall from 1.23 in Q3 2008 to 1.12 in Q4 2012.
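To make the ratio concrete, here is a minimal sketch of the calculation. The figures are made up purely for illustration and do not describe any real facility; PUE is simply total facility power divided by the power delivered to the IT equipment.

```python
# Illustrative only: example figures, not from any real data center.
# PUE = total facility power / IT equipment power

it_load_kw = 800        # power drawn by servers, storage and network gear
cooling_kw = 140        # chillers, fans, pumps
other_overhead_kw = 60  # lighting, UPS losses, ancillary systems

total_facility_kw = it_load_kw + cooling_kw + other_overhead_kw
pue = total_facility_kw / it_load_kw

print(f"PUE = {pue:.2f}")  # 1.25 here; a perfect facility would score 1.0
```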
first step: tackling cooling
So there are clearly actions you can take to ensure that more of your power goes directly to data processing. One of the most effective areas to address is cooling. In their search to reduce cooling power consumption, data center operators are moving facilities to locations where free cooling works best: essentially using untreated outside air to help cool the facility. We have seen data centers built in places as far-flung as Iceland to benefit from cool outdoor temperatures.
Orange has built an energy-efficient data center in Normandy that takes advantage of free cooling. It estimates that the facility uses outside air to cool the IT equipment for 10 out of 12 months of the year. In fact, free cooling helped Orange save a total of 225 GWh of energy in 2011.
Targeting cooling where it is needed most also uses power more effectively. The US government reportedly used sensors to carry out environmental monitoring of its server rooms and better target cooling.
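A rough sketch of the idea is below. The zone names, readings and setpoint are all hypothetical, not taken from any real monitoring system; the point is simply that cooling is ramped up only in the zones whose sensors report high temperatures, rather than uniformly across the room.

```python
# Hypothetical sketch: zone names, thresholds and readings are illustrative,
# not from any real monitoring system.

SETPOINT_C = 27.0  # assumed upper bound for server inlet temperature

zone_temps_c = {   # latest readings from in-row temperature sensors (made up)
    "row-A": 24.5,
    "row-B": 28.2,
    "row-C": 26.1,
}

for zone, temp in zone_temps_c.items():
    if temp > SETPOINT_C:
        print(f"{zone}: {temp}°C above setpoint - increase airflow here")
    else:
        print(f"{zone}: {temp}°C OK - keep cooling at baseline")
```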
second step: cutting server power
But what of the power that the servers themselves consume: can this be reduced? One approach is to identify data that isn’t being used and farm it out to a lower-powered, less responsive facility. This approach is reportedly being used by Facebook, which stores 240 billion photographs. It found that 82% of its traffic was focused on just 8% of those photos, so it created a tiered storage architecture that uses less power for photos that are accessed less often.
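As a rough sketch of the principle, a tiering policy might demote objects to lower-power storage as their access frequency drops. The tier names, thresholds and file names below are invented for illustration; this is not a description of Facebook's actual system.

```python
# Hypothetical sketch of access-based storage tiering. Tier names and
# thresholds are illustrative only.

def choose_tier(accesses_last_30_days: int) -> str:
    """Pick a storage tier based on how often an object was requested."""
    if accesses_last_30_days >= 100:
        return "hot"    # fast, always-on storage for popular content
    if accesses_last_30_days >= 5:
        return "warm"   # slower, denser storage
    return "cold"       # mostly powered-down archive, slower to respond

photos = {"beach_2008.jpg": 2, "profile_pic.jpg": 450, "party_2012.jpg": 17}
for name, hits in photos.items():
    print(f"{name}: {hits} requests -> {choose_tier(hits)} tier")
```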
Work is even going on to reduce the power used by IT equipment across the board. One of the problems with IT equipment is that it still draws power while sitting idle. The Green Grid is promoting an “Eco Mode” for data centers, which can reduce or eliminate power usage in equipment that is not actively working. Similar initiatives are happening with network equipment, such as Energy Efficient Ethernet, which reduces power on idle links.
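To see why idle power matters, here is a back-of-the-envelope calculation. All the figures are assumed for illustration rather than measured: a server that idles at a high fraction of its peak draw wastes most of its energy when utilization is low, which is exactly the waste that idle-power initiatives target.

```python
# Back-of-the-envelope illustration with assumed figures, not measured data.

peak_watts = 400      # assumed power draw at full load
idle_watts = 240      # assumed draw while doing no useful work (60% of peak)
utilization = 0.15    # assumed average utilization over the day

# Simple linear power model between idle and peak
avg_watts = idle_watts + (peak_watts - idle_watts) * utilization
useful_watts = (peak_watts - idle_watts) * utilization

print(f"Average draw: {avg_watts:.0f} W, of which only "
      f"{useful_watts:.0f} W scales with actual work")
```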
conclusion
Of course, in all of this, there is no guarantee that power requirements will go up as predicted. A report in the New York Times found that data centers actually used less energy between 2005 and 2010 than expected, possibly driven by the global recession. But whatever the future power requirements, enterprises should look to secure their future by creating more energy-efficient data centers.
Anthony
After a Masters in Computer Science, I decided that I preferred writing about IT rather than programming. My 20-year writing career has taken me to Hong Kong and London where I've edited and written for IT, business and electronics publications. In 2002 I co-founded Futurity Media with Stewart Baines where I continue to write about a range of topics such as unified communications, cloud computing and enterprise applications.