Five Lessons from a Data Center's Crisis of Capacity

By Robert Lemos, CIO

In 2005, problems in the data center at Pacific Northwest National Laboratory came to a head.

Unscheduled outages were occurring almost monthly, bringing down the data center for hours at a time. Groups kept buying rack-mounted servers, which had recently become cheap, to boost their computing resources, says Ralph Wescott, data center services manager for the government laboratory, which is managed by the U.S. Department of Energy. By July 2005, the server room had reached its capacity limit.

"Groups would go buy a server and throw it over the wall to me, saying, 'Hey, install this,'" Wescott says. "But I didn't have any space, power or cooling (capacity) left. If I installed (one more), the whole room would go dark."


Wescott and PNNL embarked on a broad project to revamp the data center without breaking the budget. Every quarter for three years, the data center group spent a weekend shutting down the server room and replacing a row of old servers, and the tangle of network cables beneath the floor, with fewer, more powerful and more efficient servers connected by cables run through the ceiling. Moving the cabling overhead freed the space under the floor for more efficient cooling.

The result? PNNL moved from 500 applications on 500 servers to 800 applications running on 150 servers.

During a tight economy, tackling such information-technology projects requires a tight grip on the purse strings, says Joseph Pucciarelli, program director of technology, financial and executive strategies for analyst firm IDC, a sister company to CIO.com.

"The situation is a very common one," he says. "Companies are making just-in-time investments. They have a problem, and they are looking at the problem in a constrained way."

Here are some lessons PNNL learned in bringing its data center back from the brink.

1. Plan, don't react

The first problem Wescott needed to solve was the data center group's habit of reacting to each small problem as it arose, rather than recognizing the systemic issues and drawing up a plan for a sustainable service. In addition to its 500 servers, the data center had some 33,000 cables connecting them to power, networking and security systems.

"We decided what the data center should look like and what its capacity should be," he says.

The group concluded that, on its then-current trajectory, the data center would house 3,000 applications, each running on its own server, within 10 years. Today, 81 percent of the center's applications are virtualized - an average of 17 per server - and Wescott plans to reach the 90 percent mark.
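Those figures imply a straightforward back-of-the-envelope calculation. The short Python sketch below illustrates that arithmetic; it is not PNNL's actual capacity-planning tool, and it assumes only the numbers quoted above (500 servers before and 150 after, 81 percent of 800 applications virtualized at an average of 17 per host, and a 90 percent target against the projected 3,000 applications).

```python
# Back-of-the-envelope consolidation math using the figures quoted
# in the article. An illustrative sketch, not PNNL's tooling.

apps_before, servers_before = 500, 500
apps_after, servers_after = 800, 150

density_before = apps_before / servers_before  # 1.0 app per server
density_after = apps_after / servers_after     # ~5.3 apps per server
print(f"App density: {density_before:.1f} -> {density_after:.1f} apps/server")

# Virtualization figures: 81% of applications virtualized,
# averaging 17 applications per physical host.
virtualized_share = 0.81
apps_per_host = 17

virtual_apps = apps_after * virtualized_share  # ~648 virtualized apps
hosts_needed = virtual_apps / apps_per_host    # ~38 physical hosts
print(f"~{virtual_apps:.0f} virtualized apps on ~{hosts_needed:.0f} hosts")

# The do-nothing trajectory was 3,000 apps on 3,000 servers in 10 years.
# At the 90 percent virtualization target, the same workload needs far
# fewer machines: virtualized apps share hosts, the rest stay physical.
target_share = 0.90
projected_apps = 3000
servers_at_target = (projected_apps * target_share / apps_per_host
                     + projected_apps * (1 - target_share))
print(f"3,000 apps at 90% virtualized: ~{servers_at_target:.0f} servers")
```

Run as written, the sketch shows the density jump from 1 to roughly 5 applications per server, and that the projected 3,000 applications would need on the order of 460 servers at the 90 percent target rather than 3,000.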
