VMware one-ups Microsoft with vSphere 5.1

By , Network World |  Virtualization, VMware, vSphere

The VCSA features are updated from the 5.0 version and offer more configuration choices, especially for authentication and databases. We could use an internal database to keep track of settings and configurations, or an external database (Oracle was recommended). MS SQL Server can't be used, and we wondered why an open source database product wasn't offered for embedded tracking. The appliance is based on SUSE Linux 11, and it uses 4GB of memory and 8GB of disk. A non-monstrous installation ought to be more easily tracked with an internal LAMP-ish database product.

The VCSA can stand alone, or be synced with others in "Linked Mode," which requires authentication through Active Directory and allows inventory views in a single group. vMotion migrations aren't possible between Linked Mode VCSAs, however, which frustrated us.

Moving needles between haystacks

In the old model, VMware's vMotion allowed moving VMs, hot/live, between hosts only if the hosts shared the same storage. VMware's Storage vMotion removes the requirement for shared storage -- provided other small constraints are respected, including the maximum number of concurrent vMotions of any type that can be handled. vMotion traffic isn't encrypted, however, and so VMware recommends (and we agree) that Storage vMotions (and normal migrations) stay within wire-secure environments.
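Admins who script these moves can reach the same relocate operation through vCenter's API. Below is a minimal sketch using the open-source pyVmomi Python SDK; the vCenter address, credentials, VM name and datastore name are placeholders, not details from our test bed.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Lab-only convenience: skip certificate verification.
ssl_ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl_ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "win2008r2-test")        # placeholder VM name
datastore = find_by_name(vim.Datastore, "target-datastore")    # placeholder datastore

# A RelocateSpec that names only a datastore moves the VM's disks while the VM
# keeps running on its current host -- in other words, a Storage vMotion.
spec = vim.vm.RelocateSpec(datastore=datastore)
WaitForTask(vm.RelocateVM_Task(spec), si=si)

Disconnect(si)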

The maximum number of concurrent migrations is often a function of network capacity. We could bond several 10G Ethernet ports together to maximize transfer rates and minimize downtime of hot/live VMs, but on a congested network, or on networks using VLANs, things could slow as VMs are tossed around. There are also limits on which data stores can be manipulated -- a function of the version of ESX or ESXi in play.
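To see how close a vCenter is to those concurrency ceilings, the in-flight migration tasks can be counted through the same API. The sketch below again uses pyVmomi with placeholder credentials, and it assumes the standard task identifiers for vMotion ("VirtualMachine.migrate") and Storage vMotion ("VirtualMachine.relocate").

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ssl_ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl_ctx)
content = si.RetrieveContent()

# Assumed descriptionIds for vMotion and Storage vMotion tasks.
MIGRATION_IDS = {"VirtualMachine.migrate", "VirtualMachine.relocate"}

running = [t.info for t in content.taskManager.recentTask
           if t.info.descriptionId in MIGRATION_IDS
           and t.info.state == vim.TaskInfo.State.running]

for info in running:
    print("%s: %s, %s%% complete" % (info.entityName, info.descriptionId, info.progress or 0))
print("%d migrations currently in flight" % len(running))

Disconnect(si)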

Using a Gigabit Ethernet network, Storage vMotion of a sample Windows 2008 R2 VM took 11 minutes with two bonded 10G Ethernet ports and a back-channel connection. Linking all three available ports actually slowed things down (16 minutes), as the back channel seems to be necessary for traffic management during v-movements. But we had finally proven the concept. More bonded 10G Ethernet ports would likely have shortened the live migration further.

We moved a VM from the lab location, over the Internet, to our cabinet at nFrame. Our local network connection is variably throttled by Comcast, so we won't quote an overall migration time. Let's just say it was a very long time. Nonetheless, it worked.


Originally published on Network World.