The dumbest thing VMware has done recently could help customers move to cloud

VMware modified the pricing structure that angered thousands of users, but kept the most important piece

On July 12, VMware released the newest version of its virtual-server/cloud-computing product vSphere, with a series of changes to its capabilities, packaging and pricing that had the potential to deliver both functional and psychological benefits to customers.

The whole effort to make the introduction of vSphere 5 a revolution in the way data-center people think about servers, virtual servers and cloud hit a wall because VMware flubbed the pricing.

The flubbing was so ham-handed and the reaction of users so angry that it drove previously loyal customers to look seriously at competing products for the first time in years. They discovered those products could accomplish a lot of what they were already doing with the more costly VMware products, according to Bernd Harzog, an analyst covering virtualization performance and capacity management at The Virtualization Practice.

"Competitors, not just Microsoft and Citrix, but some of the smaller players in cloud, too, have come a long way from the time VMware was the only real option for enterprises," he said.

Prices change the way users think about cloud

That revelation may have a greater (and more negative) long-term impact on the thinking of VMware customers than the change VMware intended the price plans to create: encouragement to think of data-center resources as fungible, mobile and redistributable, in the model of external cloud platforms rather than the hidebound traditional processes of most data centers.

VMware's previous pricing model was also designed for psychological impact. In that case, the goal was to help customers think about servers as just the concrete slab on which they could build the virtual machines of their dreams: using compute resources more efficiently, reducing time-to-deployment and generally saving a ton of money while making end users happy at the same time.

Prices were based on the number of processors in the server, not the number of VMs that server could support.

That let customers who were expanding their virtual-server installations (and saving money by retiring a lot of aging, support-intensive hardware) replace those machines with cheap servers loaded with so much memory they were top-heavy.

The memory-is-free approach from VMware meant that every additional virtual machine was also essentially free.

That no-cost factor was a major contributor to the growth of both VMware and virtual servers in enterprises, many of which have struggled with flat or shrinking IT budgets during the past few years and needed to get as much power as possible out of the hardware they could afford.

Many large VMware shops have built huge virtual infrastructures based on that model of cheap servers with an outsize proportion of RAM.

Shooting for a change to cloud-y thinking (and missing)

With vSphere 5, VMware changed gears and began charging for the amount of memory customers assigned to a VM rather than for the number of processors in the physical server on which it lived.

That seemed like extortion to many, who spent days screaming about it in user forums, then weeks coming up with ways to minimize the costs anyway or to switch to competing products.

You can't flip a switch and change the economics of the technology on which a lot of the application development and IT productivity increases of the past few years have been based and not face some backlash.

VMware spokespeople brushed off the anger as being based on the same kind of surprise Facebook users express when the interface changes. "They're not going to go away because of this," one told me.

Yes they would, if they could, according to Bernd Harzog, head virtualization and cloud guru at consulting firm The Virtualization Practice.

"A lot of people who were inspired by pricing [from VMware] to check out the competition found out it was able to do the vast majority of the things they already had vSphere doing for them," Harzog said.

"Is VMware Killing Itself," asked investment site TheMotleyFool, before concluding that it wasn't; the flap and the potential increase in costs doesn't compare to the PR disaster Netflix suffered the same week by deciding to raise prices as much as 60 percent, according to the Fool's analysis.

The thing is, vSphere 5 prices were designed to do the same thing the free-VM pricing model did: get customers to think about their computing needs differently.

Rather than basing all their decisions on the number of physical servers in the place and number of apps they could run on each (generally just one), VMware's virtual machines let them think of apps as separate workloads that could be stacked several to a server to use computing power more efficiently and spread across the enterprise to be close to the people who actually needed them.

vSphere 5 prices urged them to take that thinking a step further and think of all a data center's resources as one big pool that could be divvied out according to the needs of individual applications.

The basic pricing unit in the new plan is vRAM (virtual RAM). Each license entitles you to use a certain amount of RAM, and adds costs when you go beyond that lowish limit to improve performance in one VM by adding more memory.

License costs rise or fall according to the amount of RAM assigned to an app, but all the memory attached to all the vSphere 5 servers in the place can be included as part of the calculation, so no one is stuck dividing up the RAM in just one server or cluster.
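As a rough sketch of how that pooled calculation works, here is a toy example in Python. The hosts, their assigned vRAM and the per-license entitlement below are illustrative numbers, not VMware's published figures, and the sketch models only the vRAM side of the scheme (in practice vSphere 5 licenses were also tied to physical CPUs):

```python
import math

# Hypothetical per-license vRAM entitlement, in GB; the real figure
# depends on which vSphere 5 edition you buy.
VRAM_PER_LICENSE_GB = 24

# vRAM assigned to powered-on VMs on each host in the pool (made-up hosts).
vram_assigned_gb = {
    "host-a": 40,
    "host-b": 64,
    "host-c": 16,
}

# Because entitlements are pooled, only the total matters -- not how the
# memory happens to be spread across individual servers or clusters.
total_vram = sum(vram_assigned_gb.values())
licenses_needed = math.ceil(total_vram / VRAM_PER_LICENSE_GB)

print(f"Total vRAM assigned across the pool: {total_vram} GB")   # 120 GB
print(f"Licenses needed to cover that pool:  {licenses_needed}")  # 5
```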

That's a great way to get people thinking in terms of cloud computing – in which all computing resources are supposed to be accessible from any part of the enterprise, not locked up in one server or server cluster.

It's not great if it makes such a big change in the way customers spend money with you that they never get to the point of realizing the long-term benefit of "thinking cloud" because overspending their virtual-server budget means they'll be "thinking unemployment" long before they get any benefit from the fog.

The prices themselves weren't so different.

The new terms specifically punished users with extra cost for VMware traditions many counted on. There was a limit on how much vRAM could be assigned under one license, so apps with ultra-high RAM requirements (some use as much as a terabyte) would sometimes require a dozen or more licenses.

Capacity plans that relied on the ability to launch extra VMs to cover usage spikes or for test/dev teams would also be punished, because the memory required for those apps meant they were no longer free.

The terms also punished those with long-term plans for virtualization or cloud projects that would rely on more powerful servers able to support far more RAM than current models.

Barring any changes to existing VM infrastructures, pricing under the two plans didn't produce huge bottom-line differences.

Under the license requirements for vSphere 4.1, users paid according to the total number of cores in the CPUs in the host server. A server with one 8-core CPU and 24 GB of RAM would cost $795 to license with vSphere 4.1 Standard, compared to $995 with vSphere 5.

vSphere 5 licenses are based on "vRAM": the total amount of memory dedicated to each virtual machine, not the limits of the physical server on which it runs, according to Tim Stephan, senior director of product marketing at VMware.

A vSphere 5 Standard Edition license includes 24 GB of vRAM and costs $995.
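Plugging those numbers into a quick back-of-the-envelope comparison for that one-CPU, 8-core, 24 GB host (a sketch only: it assumes every gigabyte of physical RAM is assigned to powered-on VMs and ignores support-and-subscription costs):

```python
import math

# Figures cited above: vSphere 4.1 Standard per-CPU price, vSphere 5
# Standard per-license price, and the 24 GB vRAM entitlement per license.
PRICE_41_PER_CPU = 795       # dollars
PRICE_5_PER_LICENSE = 995    # dollars
VRAM_ENTITLEMENT_GB = 24

# Example host from the article: 1 CPU, 8 cores, 24 GB of RAM.
cpus = 1
assigned_vram_gb = 24        # assumes all physical RAM ends up assigned to VMs

cost_41 = cpus * PRICE_41_PER_CPU
cost_5 = math.ceil(assigned_vram_gb / VRAM_ENTITLEMENT_GB) * PRICE_5_PER_LICENSE

print(f"vSphere 4.1 Standard: ${cost_41}")  # $795
print(f"vSphere 5 Standard:   ${cost_5}")   # $995
```

For a host like that, the bottom line barely moves; the pain showed up on servers stuffed with far more RAM than a single entitlement covered.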

VMware caves on prices, still promotes fog-first thinking

After weeks of stink and anger, VMware agreed to modify the terms of vSphere 5 prices to make the whole conversion much less expensive and the change much less shocking.

It expanded the amount of vRAM available with each edition of vSphere by between 50 percent and 100 percent.

It also capped the amount of RAM for which it would charge on any single virtual machine so no enterprise app would require more than one license.

It also allowed customers to calculate vRAM usage according to a 12-month average rather than according to the peak usage for the year.

That last concession saved, all by itself, a major business use case for both cloud and virtualized computing: the ability to expand or contract the volume of computing services you use to match the amount of demand, and pay only for what you use.

Without that ability, customers would be stuck with physical-server limitations that force them to pay for enough power to cover their absolute highest peak of demand for the year (plus 10 percent for insurance) rather than averaging out both the demand and amount of hardware they would otherwise have to buy.
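To see why averaging matters, here is a toy comparison with invented monthly usage numbers (not VMware's actual billing math): a shop that spikes twice a year pays for the spike under peak-based licensing, but only for something close to its typical load under a 12-month average.

```python
# Hypothetical pooled vRAM usage, in GB, sampled once a month for a year.
monthly_vram_gb = [200, 200, 210, 200, 200, 600,   # June spike (say, a test/dev burst)
                   200, 200, 200, 200, 590, 200]   # November spike (seasonal demand)

peak_gb = max(monthly_vram_gb)
average_gb = sum(monthly_vram_gb) / len(monthly_vram_gb)

print(f"Peak usage:       {peak_gb} GB")          # what peak-based terms would charge for
print(f"12-month average: {average_gb:.0f} GB")   # what the revised terms charge for
```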

The changes should save customers some money. More important, from their perspective, is that the new plan makes it much easier to change their VMware infrastructures gradually to adapt to the new pricing (and the psychological outlook of full-time cloud-computing users) rather than forcing them to adapt more quickly than their budgets, peace of mind or bosses would allow.

It will still cost more, and will still require some big changes in the way customers plan hardware purchases and distribute resources across the hardware supporting virtual-server infrastructures, let alone full-blown cloud services.

vRAM, which is still the basis for pricing, will gradually change the way data centers approach both capacity planning and allocation of computing resources, making both much more similar to the way things are done in large clouds than in smaller data centers.

Changes like that – far more practical and bounded by specific configurations of servers, networks, application requirements and performance goals – don't happen quickly.

With the plethora of competing virtual-server and PaaS vendors eager to take advantage of VMware user dissatisfaction, there's a good chance many VMware users will have begun incorporating competing products into their previously homogeneous virtual infrastructures as a way to save license costs.

Analysts have been predicting for three years that enterprise customers would begin building Hyper-V-based clusters into VMware virtual infrastructures, but real-world examples until now have been relatively few.

It will also change the way enterprises buy hardware.

Watch over the next 12 months for a shift toward higher-powered servers and a drop in sales of memory to run in lower-end hardware.

Look also for a boost in virtual-server business for Citrix, which not only sells virtual-desktop systems far more adaptable than VMware's, but sells them without the penalty vSphere license costs would put on the memory-intensive servers running VDI from the data center.

It probably won't shift users en masse from VMware to other vendors, even external cloud vendors.

Instead it will push them toward standard hardware configurations that match much more closely the best-practice recommendations VMware has been putting out all along, and toward the kind of management tools VMware sells to control VM lifecycles and resource use and to limit sprawl.

If all your VMs are free, VM sprawl only wastes a little disk space or some processing capacity here or there.

If you pay by the vRAM gigabyte, every extra, unclaimed VM floating around your enterprise costs money you shouldn't have to spend, so it's worth your while to buy tools to find and kill them.

Which also helps make more efficient use of your existing hardware, increases your level of security, reduces your support costs, makes data-access audits simpler and keeps the rest of the VM infrastructure running more cleanly because processors don't have to waste effort on dead wood.

All those things are good. So is thinking about data-center resources as if they're part of a cloud, as long as doing so makes more sense than the opposite, and you can break what have become old habits of buying to support a VM habit and shift to rules of procurement that promote migration to, and efficient use of, the fog rather than encouraging you to stick with whatever stage of virtualization you've already reached.

Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty.
