Data center density hits the wall


Industrial Light & Magic has been replacing its servers with the hottest new IBM BladeCenters -- literally, the hottest.

For every new rack ILM brings in, it cuts the data center's power use by a whopping 140 kW -- an 84% drop in energy use for the servers being replaced.

But power density in the new racks is much higher: Each consumes 28 kW of electricity, versus 24 kW for the previous generation. Every watt of power consumed is transformed into heat that must be removed from each rack -- and from the data center.

The new racks are equipped with 84 server blades, each with two quad-core processors and 32GB of RAM. They are powerful enough to displace seven racks of older BladeCenter servers that the special effects company purchased about three years ago for its image-processing farm.

To cool each 42U rack, ILM's air conditioning system must remove more heat than would be produced by nine household ovens running at the highest temperature setting. This is the power density of the new infrastructure that ILM is slowly building out across its raised floor.
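The consolidation arithmetic above is easy to verify. The rack counts and per-rack figures come from the article; the roughly 3 kW draw of a household oven at full power is my own assumption:

```python
# Check the consolidation figures reported for ILM's upgrade.
old_racks = 7            # older BladeCenter racks retired per new rack
old_kw_per_rack = 24.0   # kW drawn by each old rack
new_kw_per_rack = 28.0   # kW drawn by one new rack

old_total_kw = old_racks * old_kw_per_rack     # 168 kW
savings_kw = old_total_kw - new_kw_per_rack    # 140 kW, as reported
savings_pct = savings_kw / old_total_kw * 100  # ~83%, rounded to 84% in the article

# Every watt consumed becomes heat that must be removed. Compare one
# rack's heat load to household ovens (~3 kW each -- my assumption).
oven_kw = 3.0
ovens_equivalent = new_kw_per_rack / oven_kw   # roughly 9 ovens

print(savings_kw, round(savings_pct, 1), round(ovens_equivalent, 1))
```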

Most new data centers are designed to support an average density of 100 to 200 watts per square foot, and the typical cabinet draws about 4 kW, says Peter Gross, vice president and general manager of HP Critical Facilities Services. A data center designed for 200 W per square foot can support an average rack density of about 5 kW. With carefully engineered airflow optimizations, a room air conditioning system can support some racks at up to 25 kW, he says.
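Gross's figures imply how much floor area each rack is effectively "charged" for once aisles and clearances are counted. A quick sketch (the 25-square-foot result is derived from his numbers, not stated in the article):

```python
# Relate facility design density to average rack density.
design_w_per_sqft = 200.0  # design density Gross cites (W per sq ft)
avg_rack_kw = 5.0          # average rack load supportable at that density

# Implied floor area allocated to each rack, aisles and clearances included.
sqft_per_rack = avg_rack_kw * 1000 / design_w_per_sqft  # 25 sq ft per rack

print(sqft_per_rack)
```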

Temperatures rising

Maximum operating temperatures for data center gear

Before 2004: 72 degrees F

2004: 77 degrees F

2008: 81 degrees F

Source: ASHRAE Technical Committee 9.9

At 28 kW per rack, ILM is at the upper limit of what can be cooled with today's computer room air conditioning systems, says Roger Schmidt, IBM fellow and chief engineer for data center efficiency. "You're hitting the extreme at 30 kW. It would be a struggle to go a whole lot further," he says.

[Read our related story, "Why data center temperatures have moderated."]

The sustainability question

The question is, what happens next? "In the future are watts going up so high that clients can't put that box anywhere in their data centers and cope with the power and cooling? We're wrestling with that now," Schmidt says. The future of high-density computing beyond 30 kW will have to rely on water-based cooling, he says. But data center economics may make it cheaper for many organizations to spread out servers rather than concentrate them in racks with ever-higher energy densities, other experts say.

Energy-efficiency tips

Refresh your servers. Each new generation of servers delivers more processing power per square foot -- and per unit of power consumed. For every new BladeCenter rack Industrial Light & Magic is installing, it has been able to retire seven racks of older blade technology. Total power savings: 140 kW.

Charge users for power, not just space. "You can be more efficient if you're getting a power consumption model along with square-footage cost," says Ian Patterson, CIO at Scottrade.

Use hot aisle/cold aisle designs. Good designs, including careful placement of perforated tiles to focus airflows, can help data centers keep cabinets cooler and turn the thermostat up.

Kevin Clark, director of information technologies at ILM, likes the gains in processing power and energy efficiency he has achieved with the new BladeCenters, which have followed industry trends to deliver more bang for the buck. According to IDC, the average server price since 2004 has dropped 18%, while the cost per core has dropped by 70%, to $715. But Clark wonders whether doubling compute density again, as he has in the past, is sustainable. "If you double the density on our current infrastructure, from a cooling perspective, it's going to be difficult to manage," he says.

He's not the only one expressing concerns. For more than 40 years, the computer industry's business model has been built on the rock-solid assumption that Moore's Law would continue to double compute density every two years into perpetuity. Now some engineers and data center designers have begun to question whether that's feasible -- and whether a threshold has been reached.

The threshold isn't just about whether chip makers can overcome the technical challenges of packing transistors even more densely than today's 45nm technology allows, but whether it will be economical to run large numbers of extremely high-density server racks in modern data centers. The newest equipment concentrates more power into a smaller footprint on the raised floor, but the electromechanical infrastructure needed to support every square foot of high-density compute space -- from cooling systems to power distribution equipment, UPSs and generators -- is getting proportionally larger.

Data center managers are taking notice. In a 2009 IDC survey of 1,000 IT sites, 21% ranked power and cooling as their No. 1 data center challenge. More than two-fifths (43%) reported increased operational costs, and one-third had experienced server downtime as a direct result of power and cooling issues.

Christian Belady is the lead infrastructure architect for Microsoft's Global Foundation Services group, which designed and operates the company's newest data center in Quincy, Wash. He says the cost per square foot of raised-floor space is too high. In the Quincy data center, he says, infrastructure costs accounted for 82% of the total project cost.

The case for, and against, running data centers hotter

Raising the operating temperature of servers and other data center gear doesn't always save on cooling costs. Above roughly 77 degrees F, most servers sold today increase their fan speeds significantly to keep processor and other component temperatures constant, and the processors themselves suffer higher leakage currents, says IBM fellow Roger Schmidt.

Power consumption increases as the cube of the fan speed -- so if speed increases by 10%, that means a 33% increase in power. At temperatures above 81 F, data center managers may think they're saving energy when in fact servers are increasing power usage at a faster rate than what is saved in the rest of the data center infrastructure.
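The cube relationship described above is the fan affinity law, and the article's 10%-to-33% figure follows directly from it. A minimal sketch:

```python
# Fan affinity law: fan power scales with the cube of fan speed.
def fan_power_ratio(speed_ratio: float) -> float:
    """Return relative power draw for a relative change in fan speed."""
    return speed_ratio ** 3

# A 10% speed increase costs about 33% more power, as the article notes.
increase = fan_power_ratio(1.10) - 1.0  # ~0.331

print(round(increase * 100, 1))
```

The same law works in reverse: modest reductions in fan speed yield outsized power savings, which is why variable-speed fans are a common efficiency measure.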

Bottom line: You would still save energy overall if you raised the temperature to 81, but going higher presents challenges to systems and component designers. Could equipment be designed to operate at higher temperatures? Possibly, Schmidt says. "Manufacturers will have to come together as a group to determine whether we should recommend a higher limit that will, in fact, save energy at the data center level."

Tom Bradicich, an IBM vice president for architecture and technology for the company's x86 servers, says that with all of the different equipment in a data center, getting the facility optimized for 81 degrees is difficult. Even getting the components in the boxes IBM builds to meet the current spec can be a challenge. "We're working in a world where we integrate a lot of third-party components. At the end of the day, IBM doesn't make the microprocessor and other components."

Dyan Larson, director of data center technology initiatives at Intel, thinks the day when everything in a data center can run safely at 81 degrees is still a long way off. "There's a reliability concern people have when it comes to running data centers at higher temperatures. Until the industry says, 'We're going to warranty these things for higher temperatures,' we're not going to get there."

"We're beyond the point where more density is better," Belady says. "The minute you double compute density, you double the footprint in the back room."

HP's Gross has designed large data centers for both enterprises and Internet-based businesses like Google and Yahoo, whose data centers consist of large farms of Web servers and associated equipment. Gross thinks Belady's costs are about average: Electromechanical infrastructure typically makes up about 80% of the cost of a new Tier 4 enterprise data center, regardless of the facility's size, and 65% to 70% for Internet-based data centers, he says. Those percentages haven't increased much as power densities have risen in recent years, he adds.

As compute density per square foot increases, overall electromechanical costs tend to stay about the same, Gross says. But because power density also increases, the ratio of electromechanical floor space needed to support a square foot of high-density compute floor space also goes up.

IBM's Schmidt says the cost per watt, not the cost per square foot, remains the biggest construction cost for new data centers. "Do you hit a power wall down the road where you can't keep going up this steep slope? The total cost of ownership is the bottom line here," he says. Those costs have for the first time pushed some large data center construction projects past the $1 billion mark. "The C suites that hear these numbers get scared to death because the cost is exorbitant," he says.

Ever-higher energy densities are "not sustainable from an energy use or cost perspective," says Rakesh Kumar, an analyst at Gartner Inc. Fortunately, most enterprises still have a ways to go before they see average per-rack loads in the same range as ILM's. Some 40% of Gartner's enterprise customers are pushing beyond 8 to 10 kW per rack, and some are as high as 12 to 15 kW per rack. However, those numbers continue to creep up.

In response, some enterprise data centers, and managed services providers like Terremark Inc., are starting to monitor power use and factor it into what they charge for data center space. "We're moving toward a power model for larger customers," says Ben Stewart, senior vice president of engineering at Terremark. "You tell us how much power, and we'll tell you how much space we'll give you."

But is it realistic to expect customers to know not just how much equipment they need hosted but how much power will be needed for each rack of equipment?

"For some customers, it is very realistic," Stewart says, In fact, Terremark is moving in this direction in response to customer demand. "Many of them are coming to us with a maximum-kilowatt order and let us lay the space out for them," he says. If a customer doesn't know what its energy needs per cabinet will be, Terremark sells power per "whip," or power cable feed to each cabinet.

Containment: The last frontier

IBM's Schmidt thinks further power-density increases are possible, but the methods by which data centers cool those racks will need to change.

More energy-efficiency tips

Look for the most efficiently designed servers. Hardware that meets the EPA's Energy Star specification offers features such as power management, energy-saving power supplies and variable-speed cooling fans. The upfront price may be slightly higher but is typically offset by lower operating costs over the product's life cycle.

Consider cold-aisle containment. Once you have a hot aisle/cold aisle design, the next step for cabinets exceeding about 4 kW is to use cold-aisle containment techniques to keep high-density server cabinets cool. This may involve closing off the ends of aisles with doors, using ducting to target cold air and installing barriers atop rows to prevent hot air from circulating over the tops of racks.

Use variable-speed fans. Computer room air conditioning systems rely on fans, or air handlers, to push cold air in and pull hot air out of the space. Because fan power scales with the cube of fan speed, cutting fan speed in half reduces a fan's power draw to one-eighth -- 12.5% -- of its full-speed level.

ILM's data center, completed in 2005, was designed to support an average load of 200 W per square foot. The design has plenty of power and cooling capacity overall. It just doesn't have a method for efficiently cooling high-density racks.

ILM uses a hot aisle/cold aisle design, and the staff has successfully adjusted the number and position of perforated tiles in the cold aisles to optimize airflow around the carefully sealed BladeCenter racks. But to avoid hot spots, the room air conditioning system is cooling the entire 13,500-square-foot raised floor space to a chilly 65 degrees.

Clark knows it's inefficient; today's IT equipment is designed to run at temperatures as high as 81, so he's looking at a technique called cold-aisle containment.


Other data centers are already experimenting with containment -- high-density zones on the floor where doors seal off the ends of either the hot or cold aisles. Barriers may also be placed along the top of each row of cabinets to prevent hot and cold air from mixing near the ceiling. In other cases, cold air may be routed directly into the bottom of each cabinet, pushed up to the top and funneled into the return-air space in the ceiling plenum, creating a closed-loop system that doesn't mix with room air at all. "The hot/cold aisle approach is traditional but not optimal," says Rocky Bonecutter, data center technology and operations manager at Accenture. "The move now is to go to containment."
