In a speech opening day two of the VMworld show in Las Vegas, Herrod also described improvements to VMware's core virtual machine technology that should allow businesses to run larger, more demanding applications on virtualized servers.
VirtualCenter Management Server, the control node for VirtualCenter, today runs only on versions of Microsoft's Windows Server OS. vCenter, an updated and renamed version planned for next year, will also be available as a "virtual appliance" that runs on Linux, Herrod said.
The company is also working to bring the VirtualCenter client, which currently runs on Windows PCs, to Linux, the Mac OS and also devices like Apple's iPhone. Herrod showed only a slide photo of the iPhone interface, but it was enough to get him some applause.
VMware has been emphasizing application performance and availability throughout the show. "The focus for VMware is to make sure we can run any application at all, no matter how much performance it demands," Herrod said.
To that end, VMware will next year increase the compute capacity its virtual machines can address to four CPUs and 64G bytes of RAM, from two CPUs and 4G bytes of RAM today. I/O throughput will increase to 9G bytes per second, from 300M bytes per second today.
IT staff will be able to put up to 64 server nodes in a virtual resource pool cluster -- the pool of computers available for use in a virtual environment.
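The resource-pool idea can be illustrated with a toy model: each server node contributes CPUs and memory to a shared pool, capped at the 64-node cluster limit cited above, and virtual machines draw on the aggregate rather than on any single host. This is a hypothetical sketch for illustration only, not VMware's actual API; all class and method names are invented.

```python
# Toy model of a virtual resource pool: nodes contribute capacity,
# and virtual machines are admitted only if the pool can hold them.
# Hypothetical illustration -- not VMware's actual interfaces.

MAX_NODES = 64  # cluster limit described above

class ResourcePool:
    def __init__(self):
        self.nodes = []          # list of (cpus, ram_gb) tuples
        self.used_cpus = 0
        self.used_ram_gb = 0

    def add_node(self, cpus, ram_gb):
        if len(self.nodes) >= MAX_NODES:
            raise ValueError("pool is limited to %d nodes" % MAX_NODES)
        self.nodes.append((cpus, ram_gb))

    @property
    def total_cpus(self):
        return sum(c for c, _ in self.nodes)

    @property
    def total_ram_gb(self):
        return sum(r for _, r in self.nodes)

    def admit_vm(self, cpus, ram_gb):
        """Admit a VM if free aggregate capacity remains."""
        if (self.used_cpus + cpus <= self.total_cpus
                and self.used_ram_gb + ram_gb <= self.total_ram_gb):
            self.used_cpus += cpus
            self.used_ram_gb += ram_gb
            return True
        return False

pool = ResourcePool()
for _ in range(4):
    pool.add_node(cpus=8, ram_gb=32)   # four hosts -> 32 CPUs, 128GB pooled

print(pool.admit_vm(cpus=4, ram_gb=64))  # fits in the aggregate pool
```

The point of the sketch is that a VM too large for any one host can still be scheduled against the pooled capacity of the cluster.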
Herrod walked through VMware's plan to deliver next year a "virtual data center OS," a set of technologies for aggregating all resources in a data center, including storage and networking, and for moving virtual machines between them more easily with their policies attached.
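The "policies attached" notion can be sketched in a few lines: if a policy travels as part of the virtual machine object, it follows the VM automatically when the VM moves between hosts. Again, a conceptual toy, with invented names, not VMware's real mechanism.

```python
# Toy sketch: a VM carries its policy with it when migrated between
# hosts, so settings like bandwidth guarantees need not be recreated.
# All names here are hypothetical illustrations.

class VM:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy     # e.g. QoS or security rules

class Host:
    def __init__(self, name):
        self.name = name
        self.vms = []

def migrate(vm, src, dst):
    """Move a VM; its attached policy moves with the object."""
    src.vms.remove(vm)
    dst.vms.append(vm)

a, b = Host("esx-a"), Host("esx-b")
vm = VM("web01", policy={"min_bandwidth_mbps": 100})
a.vms.append(vm)
migrate(vm, a, b)
print(vm in b.vms, vm.policy["min_bandwidth_mbps"])  # True 100
```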
He demonstrated VMware Fault Tolerance, which was previewed at VMworld last year and is also expected in 2009. It uses what VMware calls vLockstep technology to maintain a constantly updated copy of a virtual machine on a different physical server.
Herrod demonstrated the technology running a one-armed bandit application (the slot machine being endemic to Las Vegas). He showed that if the primary server goes down, because someone kicks a cable or accidentally switches it off, the workload switches to the remote server and the application keeps running without interruption, with the same data available to it.
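The demo above boils down to a simple idea: every input the primary virtual machine sees is mirrored to a backup on another host, so the backup holds identical state and can be promoted instantly. The sketch below is purely conceptual, with invented names; it is not how vLockstep is actually implemented.

```python
# Conceptual illustration of the lockstep idea behind VMware Fault
# Tolerance: mirror every state change to a backup copy on another
# host, then promote the backup if the primary fails.
# Hypothetical sketch only -- not vLockstep's real design.

class SlotMachineVM:
    """Stand-in for the slot-machine workload in the demo."""
    def __init__(self):
        self.credits = 0

    def apply(self, event):
        self.credits += event

class FaultTolerantPair:
    def __init__(self):
        self.primary = SlotMachineVM()
        self.backup = SlotMachineVM()
        self.primary_alive = True

    def record(self, event):
        # Mirror every input to the backup before acknowledging it,
        # keeping the two copies logically in lockstep.
        self.primary.apply(event)
        self.backup.apply(event)

    def fail_primary(self):
        # e.g. someone kicks a cable; the backup takes over.
        self.primary_alive = False
        self.primary = self.backup

pair = FaultTolerantPair()
for win in (5, -1, 3):
    pair.record(win)
pair.fail_primary()
print(pair.primary.credits)  # 7 -- same data, no interruption
```

Because the backup replayed the same events, the promoted copy reports the same credit balance the failed primary held, which is what made the demo's seamless failover possible.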