August 22, 2011, 7:00 AM — Many data centers are up against the maximum electric power their utility can deliver. Others face management challenges: the time it takes to deploy new capacity and to manage existing capacity and systems. And gains made by virtualizing and consolidating servers are often lost again as more gear is added.
The demand for more CPU cycles and petabytes of storage won't go away. Nor will budget concerns, or the cost of power, cooling and space.
Here's a look at how vendors, industry groups, and savvy IT and Facilities planners are meeting those challenges -- plus a few ideas that may still be a little blue-sky.
Location, location, location
Data centers need power. Lots of it, and at a favorable price.
Data centers also need cooling, since all that electricity going to and through IT gear eventually turns into heat. Typically, this cooling requires yet more electrical power. One measure of a data center's power efficiency is its PUE -- Power Usage Effectiveness -- which is the ratio of total power consumed by the facility for IT, cooling, lighting, etc., divided by the power consumed by IT gear. The best PUE is as close as possible to 1.0; PUE ratings of 2.0 are, sadly, all too typical.
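The PUE ratio described above is a simple division; here is a minimal sketch of the calculation, with hypothetical wattage figures chosen only for illustration:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A value near 1.0 means almost all power reaches IT gear; 2.0 means
    the facility burns a full watt of overhead (cooling, lighting, power
    distribution losses) for every watt of IT load.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical example: a facility drawing 1,500 kW in total
# to support a 1,000 kW IT load.
print(pue(1500, 1000))  # 1.5
```

At a PUE of 1.5, a third of the facility's power never does IT work; the all-too-typical 2.0 means half of it is overhead.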
"You want to be in a cool dry geography with cheap power, like parts of the Pacific NorthWest. For example, FaceBook's data center in Prineville, Oregon. Or in a very dry place, where you can get very efficient evaporative cooling," says Rich Fichera, VP and Principal Analyst, Infrastructure and Operations Group, Forrester Research.
[ See also: Facebook shares its data-center secrets ]
Companies like Apple, Google and Microsoft, along with data center hosting companies, have been sussing out sites that meet affordable power and cooling criteria -- along with freedom from earthquakes and dangerous weather extremes, available and affordable real estate, good network connectivity, and good places to eat lunch.