Vendor claims about storage virtualization flawed

IT managers have been advised to be wary of vendor hype surrounding storage virtualization as it is a technology that is poorly defined, misunderstood and not widely used, according to Dr. Kevin McIsaac, an advisor at research firm Intelligent Business Research Services Pty Ltd. (IBRS).

Despite all the hype, McIsaac said that over the next two years network-based storage virtualization will remain a niche technology, while thin provisioning will enjoy rapid adoption in the enterprise.

And while McIsaac readily admits server virtualization is one of the best IT infrastructure trends to emerge in many years, he said the situation is very different when it comes to storage virtualization.

"This idea of being able to layer virtualization over existing storage arrays is seriously flawed," he warned.

McIsaac said a reasonable definition of storage virtualization is "the abstraction of logical storage from physical storage". However, given the sweeping nature of this definition it is not surprising that the technology creates confusion.

"The first step in understanding storage virtualization is to recognize that many of today's commonly used techniques and technologies are examples of virtualization including a file system or a storage array," McIsaac said.

Rather than thinking of it as a specific new product or feature, McIsaac said it should be thought of as a broad technique that can be deployed at any layer of the storage hardware and software stack to simplify the storage environment.
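
To make that definition concrete, the following is a minimal, hypothetical Python sketch of the "logical over physical" mapping that a file system or storage array performs internally. The LogicalVolume and PhysicalDisk names are illustrative only and do not correspond to any vendor's product or API.

# Hypothetical sketch: a logical volume presents one contiguous address space
# while its extents may live on any physical disk underneath it.

from dataclasses import dataclass, field


@dataclass
class PhysicalDisk:
    name: str
    size_gb: int
    used_gb: int = 0

    def allocate(self, gb: int) -> tuple:
        """Reserve an extent and return (disk name, starting offset in GB)."""
        if self.used_gb + gb > self.size_gb:
            raise ValueError(f"{self.name} has no room for {gb} GB")
        offset = self.used_gb
        self.used_gb += gb
        return self.name, offset


@dataclass
class LogicalVolume:
    """One contiguous logical address space backed by extents on many disks."""
    name: str
    extents: list = field(default_factory=list)

    def extend(self, disk: PhysicalDisk, gb: int) -> None:
        disk_name, offset = disk.allocate(gb)
        self.extents.append((disk_name, offset, gb))


# Usage: the server sees one 300 GB volume; the blocks actually sit on two disks.
d1, d2 = PhysicalDisk("disk-A", 200), PhysicalDisk("disk-B", 200)
vol = LogicalVolume("app-data")
vol.extend(d1, 150)
vol.extend(d2, 150)
print(vol.extents)   # [('disk-A', 0, 150), ('disk-B', 0, 150)]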

"Network based virtualization, which involves using a device in the network to provide an abstraction layer over storage arrays, is usually what vendors mean when they refer to storage virtualization," he explained.

"The idea is to layer virtualization over existing arrays to create a single storage pool, simplify management and eliminate vendor lock-in. But this idea has significant flaws."

Organizations typically moved to an external storage array, either via a SAN (storage area network) or a NAS (network-attached storage), to achieve higher utilization by sharing the same pool of spare disk across multiple servers.

McIsaac said that where efficient utilization has already been achieved at the array level, applying network-based storage virtualization to pool the arrays is unlikely to improve the environment significantly.

"Network-based storage virtualization results in a lowest common denominator view of the infrastructure, eliminating the value added features of the array; this investment in the advanced features of the storage array could be lost making it a waste of money," he said.

"Also the addition of the virtualization layer adds yet more complexity to the environment, it can introduce a performance bottleneck and add yet another potential.source of failure. And while it may eliminate vendor lock-in at the storage array it replaces it with lock-in at the virtualization layer."

McIsaac believes array-based virtualization is the best solution and the next major step forward is thin provisioning. This is a provisioning mechanism for allocating storage capacity on a just-in-time basis from a single shared capacity pool. In this approach the physical storage is only allocated when it is used, not when it is provisioned.

"Administrators typically over allocate storage because of the complexity, impact and additional work to grow capacity. For example, if an application requires 50G bytes the DBA will request 100G bytes for head room, then the storage administrator doubles that just to be sure and provisions 200G bytes," he said.

"Thin provisioning eliminates unused storage, reduces capital costs and simplifies capacity planning. While it isn't widely used at present its adoption will accelerate rapidly as vendor support widens and administrators become aware of its benefits.

"I'm predicting that over the next 18 months thin provisioning will become as pervasive as other array-based virtualization features such as RAID and snapshots."

McIsaac advised administrators to review current vendors and technologies recognizing that "some vendor lock-in is unavoidable".

He said organizations should avoid network-based storage virtualization and instead minimize the number of array vendors deployed in the SAN or NAS.

ARD Consulting IT manager, Eric Biggsley, said when it comes to storage it's all about simplicity, which is why he opted for an iSCSI SAN.

"Also I was making my purchasing decision when VMWare announced it would be adding iSCSI support," he said.

"This was great news because a SAN is necessary to get the most out of virtualization, but I didn't want to do it with Fibre Channel because it was just too costly and we didn't really have the resources to support it."

Since then Biggsley has combined server virtualization with SAN technology.

"Virtualization was one of the key drivers behind the selection of an iSCSI SAN," he said.

This story, "Vendor claims about storage virtualization flawed" was originally published by Computerworld Australia.
