November 12, 2008, 3:20 PM — I attended a Green Data Center Tradeshow in Wilmington, Delaware this week. The half-day trade show -- hosted by Cisco, EMC, VMware, APC and Panduit -- was very focused and fast-paced and gave me a lot to think about. Data centers all over the country are experiencing many of the same problems that I've experienced myself: UPS devices getting close to 100% capacity, air conditioning systems needing to be augmented, and densely packed racks dangerously warming up even when the overall temperature in the rooms is so chilly that no one can work in them without a heavy sweater. If we're all to avert the looming data center crisis, there are many things we should be thinking about today.
Why do I use the word "crisis"? Because a large percentage of data centers today can barely get enough power to accommodate their required growth. In 2006, Gartner predicted that 50% of data centers would not have sufficient power by 2008 -- and they weren't far off. Data center upgrades in urban areas are being delayed because additional power is often not available. Will we be able to avoid the 96% deficiency predicted for 2011? Maybe. Maybe not.
Take, for example, the question of why we're having green data center discussions at all. It's only in small part due to the climate crisis and more directly due to increasing power bills. With the increasing density of systems in our computer racks, we have more systems to cool, more heat per system and particular "hot spots" in our data centers. When we add cooling to an entire data center to compensate for the additional heat in a few racks, we dramatically increase the cost of the cooling. In fact, cooling is now the dominant consumer of power in the average data center.
Just think about it. We are 1) using more power per server -- roughly four times as much as we used five years ago, 2) putting twice as many servers in the average rack, maybe more, 3) significantly adding to our cooling requirements and 4) watching energy costs rise. We are also putting more data online every year on larger, faster disks. The concomitant increase in power and cooling costs is so large that, over the life span of a system, it is roughly equivalent to the system's purchase price.
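The back-of-envelope math here is easy to sketch. Below is a small Python illustration; the specific figures (a 400-watt server, a power usage effectiveness of 2.0 meaning cooling and overhead double the IT load, 10 cents per kilowatt-hour, a four-year life) are my own illustrative assumptions, not numbers from the show.

```python
# Back-of-envelope estimate of lifetime power + cooling cost for one server.
# All input figures below are illustrative assumptions, not measurements.

def lifetime_energy_cost(watts, pue, cents_per_kwh, years):
    """Dollars of electricity to run AND cool one server over its life.

    watts         -- average draw of the server itself
    pue           -- power usage effectiveness; 2.0 means cooling/overhead
                     consume as much again as the IT load
    cents_per_kwh -- utility rate in cents
    years         -- service life of the machine
    """
    hours = years * 365 * 24
    kwh = watts * pue * hours / 1000.0
    return kwh * cents_per_kwh / 100.0

# A hypothetical 1U server: 400 W draw, PUE 2.0, 10 cents/kWh, 4 years.
cost = lifetime_energy_cost(400, 2.0, 10, 4)
print(round(cost))  # about $2,800 -- in the same ballpark as a purchase price
```

With those assumptions the electricity bill lands in the same range as what a commodity server cost at the time, which is exactly the point: the power line item is no longer a rounding error.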
What are the reasons? For one thing, we have more densely packed racks because high-end servers are much smaller than they used to be. A server replacing a 4U or 8U system might be only 1U or 2U. For another, we have faster disks. No surprise to you physicists out there, but the faster a drive spins, the more heat it creates. Spinning drives actually represent a major part of the power consumed by a system. After all, other than fans, they are the only moving parts in a server, and disks that revolve 10,000 or 15,000 times a minute generate a remarkable amount of heat.
Some of the suggestions presented at the show included the "cold row/hot row" method of arranging racks. Why should the hot air exhausted from one row of racks become the inhaled air for the next? Instead, with racks in alternate rows turned 180 degrees, the hot air exhausted in every other row can be sucked out of the room and all systems can be pulling in cooler air. We also examined an "in row" cooling system that cools individual racks, not the entire room. What a downright sensible idea!
Of course, there's much more to consider with respect to virtualization and consolidation and how we will be able to continue monitoring our data centers when virtual systems move "automagically" across our hardware, but the technology is converging just in time to help us avert a data center crisis.
I also got a pen that lights up! Now how cool is that?