Data center fabrics catching on, slowly

It takes some planning -- and expense -- to revamp switching gear at the enterprise level.

By Esther Shein, Computerworld | Data Center

Walz Group found its data center couldn't scale as quickly as the business needed it to in order to serve clients, says Chief Information Security Officer Bart Falzarano. But there's been a dramatic change since the company installed a fabric.

Previously, when the company was using HP technology, Falzarano recalls, one of its database nodes went down, which required getting the vendor on the phone, eventually pulling three of the four CPUs and working through a troubleshooting process that took four hours. By the time the team got the part it needed, installed it and returned to normal operations, 14 hours had passed, says Falzarano.

"Now, for the same [type of failure], if we get a degraded blade server node, we un-associate that SQL application and re-associate the SQL app in about four minutes. And you can do the same for a hypervisor," he says.

IT has been tracking data center performance and benchmarking some key metrics, and Falzarano reports that the team immediately saw a port-density reduction of 8 to 1, meaning less cabling complexity and fewer required cables. Where IT previously saw a low virtualization efficiency of 4 to 1 with the earlier technology, Falzarano says the ratio is now greater than 15 to 1, and the team can virtualize apps that it couldn't before.

Other findings include a rack reduction of greater than 50%, due to the amount of virtualization the IT team was able to achieve; more centralized systems management, with one IT engineer now handling 50 systems; and improvement in what Falzarano refers to as "system mean time before failure."

"We were experiencing a large amount of hardware failures with our past technology; one to two failures every 30 days across our multiple data centers. Now we are experiencing less than one failure per year," he says.


Case study: Fabrics at work

When he used to look around his data center, all Dan Shipley would see was "a spaghetti mess" of cables and switches that were expensive to manage and error-prone. Shipley, architect at $600 million Supplies Network, a St. Louis-based wholesaler of office products, says the company had all the typical issues associated with a traditional infrastructure: some 300 servers that consumed a lot of power, took up a lot of space and experienced downtime due to hardware maintenance.

Originally published on Computerworld.