March 07, 2011, 12:29 PM — While virtual servers have proven a boon in the data center, they don't address the challenge of incrementally adding server capacity and automatically distributing load across those servers. As a result, the responsiveness and availability of a highly utilized Web application, such as Microsoft SharePoint, can deteriorate when the virtual machine it runs on runs out of capacity. Next-generation application delivery controllers (ADCs) not only address this challenge, they interoperate with virtualization tools to provide greater control and even make it possible to automatically deploy server resources based on real-time demand.
Virtualization does not change the reality that a given physical server has a fixed performance capacity. Because virtual machines (VMs) share that capacity, a spike in any one virtual server's utilization can adversely affect every other virtual server running on the same hardware. For example, if a virtual server running a database application experiences a spike in queries, the increased processor load can leave the other virtual servers on that hardware unable to deliver adequate performance.
Perhaps the most frequently misunderstood aspect of virtualization with respect to quality-of-service management is the hypervisor's lack of application awareness. While virtualization management tools can monitor and control the guest operating systems they host, the same is not true for the applications running on those guests. Virtualization environments are blind to failures or bottlenecks at the application layer: the infrastructure may consider a guest machine healthy according to operating-system metrics even while the applications running on that server are unresponsive.
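This gap is why ADCs probe the application itself rather than the guest OS. A minimal sketch of such an application-layer health check in Python follows; the endpoint URL, timeout, and expected status are illustrative assumptions, not part of any particular ADC's configuration:

```python
import socket
import urllib.request
import urllib.error


def app_health_check(url, timeout=2.0, expect_status=200):
    """Probe the application endpoint, not just the host.

    A reachable VM with a running OS is not enough: this check
    succeeds only if the application answers with the expected
    HTTP status within the timeout.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == expect_status
    except (urllib.error.URLError, socket.timeout, OSError):
        # DNS failure, connection refused, 4xx/5xx, or timeout:
        # from the load balancer's view, the application is down.
        return False
```

An ADC runs a probe like this against each back-end server on a short interval and removes unresponsive servers from rotation, even when the hypervisor still reports the guest as healthy.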
Scaling an application without modifying it requires server load balancing, in which advanced ADCs intelligently distribute end-user requests across multiple servers; from the end user's perspective, there is only one server.
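The core distribution idea can be sketched in a few lines of Python. This is a toy round-robin selector, one of the simplest policies an ADC can apply; the server addresses are hypothetical, and real ADCs combine such policies with health checks, session persistence, and load-aware weighting:

```python
import itertools


class RoundRobinPool:
    """Toy model of an ADC's distribution logic: clients address one
    virtual server, while each request is handed to the next back-end
    server in turn."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        # Return the back-end server that should handle the next request.
        return next(self._cycle)


pool = RoundRobinPool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
assignments = [pool.pick() for _ in range(6)]
```

Adding capacity then amounts to adding another address to the pool; clients never see the change, which is what lets the application scale without modification.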