Server virtualization: Enabling the on-demand future

Evolving Solutions –

IT professionals dream of robust networking environments that are capable of processing weekly payroll, monthly commissions, and end-of-year accounting -- A/R, A/P, and General Ledger "close outs" -- while at the same time maintaining their daily ERP, CRM, and e-mail systems. Most servers, even under extreme conditions, rarely reach maximum processing power. In fact, on a typical workday, most servers (particularly Windows servers) rarely surpass a 10% utilization rate.

Fortunately for IT professionals, virtualization is making the dream a reality.

Although most companies are not taking advantage of virtual server expansion and contraction capabilities today, it is possible to "borrow" CPU and memory capacity from other servers that are not being heavily taxed. When it is no longer needed, that borrowed capacity can be returned to its original owners in its original state. Imagine spoofing servers into thinking they have unlimited CPU and memory capacity and, as a result, never running into processing or workload thresholds.
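
To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of that borrow-and-return cycle. The Server class, the share accounting, and the 80% threshold are illustrative assumptions, not the API of any particular virtualization product:

# Hypothetical sketch only: real virtualization products manage far more
# than a single "CPU share" number, but the lend-and-repay pattern is similar.

class Server:
    def __init__(self, name, cpu_total):
        self.name = name
        self.cpu_total = cpu_total    # CPU "shares" this server may schedule
        self.cpu_used = 0             # shares currently consumed by its workload
        self.borrowed_from = {}       # lender name -> shares on loan

    def utilization(self):
        return self.cpu_used / self.cpu_total


def borrow_capacity(needy, pool, threshold=0.8):
    """Borrow idle shares from lightly loaded peers until the needy
    server drops back under its utilization threshold."""
    for lender in pool:
        if needy.utilization() <= threshold:
            break
        idle = int(lender.cpu_total * (threshold - lender.utilization()))
        if idle <= 0:
            continue
        lender.cpu_total -= idle      # lender temporarily gives up shares
        needy.cpu_total += idle       # borrower's ceiling grows
        needy.borrowed_from[lender.name] = (
            needy.borrowed_from.get(lender.name, 0) + idle)


def return_capacity(needy, pool):
    """Hand every borrowed share back to its original owner."""
    for lender in pool:
        shares = needy.borrowed_from.pop(lender.name, 0)
        lender.cpu_total += shares
        needy.cpu_total -= shares

The point is simply that capacity flows to wherever the work is and then flows back; commercial products track far more than CPU shares, but the pattern is the same.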

Engineers at Evolving Solutions, Inc., a data disaster recovery, storage architecture, and business continuity solutions provider, predict that by the end of 2004 and into early 2005 servers that auto-monitor and auto-adjust for data-on-demand requirements will become common in larger IT shops. Servers that are able to auto-adjust to continuously changing CPU and memory needs will become as widely accepted as the current "cascading servers" methodology. More than simply a foray into virtualization, this is a complete leap into autonomic computing.

Local server virtualization

Processing power needed for multiple employees to open large files located on a single server can push CPUs and memory past pre-defined thresholds, typically set at 70%-80%. When servers exceed those thresholds, the shortage of processing power drastically slows data and document retrieval across your LANs and WANs. This often results in hard dollar costs (replacing smaller servers with larger ones or clustering existing servers) and soft dollar costs (mainly from lost employee productivity). Scale this scenario up to an online transaction processing (OLTP) environment and you can imagine how rapidly the costs mount.

Take the example of Local Books, a small fictional company that sells books written by local authors from their store on Main Street. The first day they launched their online shopping site, they received 30,000 hits and hundreds of attempted transactions. Because they had not effectively planned for this activity, they found their OLTP and back-end database server significantly taxed.

Wait cycles increased because the CPUs and memory were functioning constantly beyond an 80% utilization threshold. Spikes in wait times meant Web site visitors and online buyers were negatively impacted. All of this happened while their SQL, file and print, and Exchange servers were running essentially idle at less than 10% utilization.

Unfortunately, this type of scenario is fairly typical. While most organizations plan for system failure, they often forget to plan for success and system scalability. If Local Books had a plan in place to provide additional capacity on-demand when the orders came flooding in, their systems would have been ready for the onslaught, orders would not have been dropped, and their customers would not have been frustrated by long wait times.

A virtualized server environment, using products like VMware or IBM's Orchestrator, would have prevented Local Books' OLTP server from reaching its 70%-80% processing threshold. The server would have dynamically accessed available resources from the SQL, file and print, and Exchange servers, temporarily borrowing processing power to complete transactions during peak ordering periods and eliminating wait times. When the capacity was no longer needed, the OLTP server would have returned it to the respective servers. Local Books' brand equity would have remained intact, and a hefty profit would have been made on the opening day of the online store.
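
Reusing the hypothetical pool sketch from earlier, launch day at Local Books might have played out something like this; the server names and capacities are invented purely for illustration:

# Illustrative launch-day walk-through, reusing the hypothetical Server,
# borrow_capacity, and return_capacity sketch from earlier.
oltp = Server("oltp01", cpu_total=8)
peers = [Server("sql01", 8), Server("fileprint01", 4), Server("exchange01", 8)]

oltp.cpu_used = 7              # launch-day surge: roughly 88% utilization
for peer in peers:
    peer.cpu_used = 1          # the rest of the farm sits nearly idle

borrow_capacity(oltp, peers)   # pull idle shares until oltp is back under 80%
print(oltp.cpu_total, round(oltp.utilization(), 2))   # 13 shares, ~0.54

oltp.cpu_used = 2              # the ordering rush subsides
return_capacity(oltp, peers)   # every borrowed share goes back to its owner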

Remote server virtualization

Soon, Local Books grew to become National Books, and they had in place a plan for exponential growth. They implemented a virtualized server environment, which reduced wait times and processed more online orders than they could initially fathom. Now the National Books Web site receives millions of hits and processes tens of thousands of online transactions and book orders each day.

Without a virtualized environment, each time order processing reached its capacity, it would slow down request processing, cause time-out errors, or, worst of all, bring the Web site to a halt. Moreover, the unanticipated additional traffic on their server could have led to data corruption, lost sales, and diminished credibility of their company brand.

But because National Books chose to implement a virtualized server environment, their primary applications could share resources with other (secondary) applications such as Exchange, SQL, and SAP. Sales and online Web site transactions could be conducted without slowing down the network, resulting in increased per-transaction profitability and brand awareness.

Notably, National Books could achieve all of this without adding servers each time they ran a special promotion or had an important book released. Their virtualized server environment enabled them to increase their CPU and memory resources on-demand and without having to spend additional hard dollars. And processing horsepower was guaranteed no matter how large the demand.

Server virtualization: First steps

Server virtualization, and the money savings and resource sharing it offers, is available today. The following three steps will get your company on the path not only to a virtualized network environment, but to the ultimate goal of autonomic computing.

Assess & validate -- Conduct an environmental assessment to define each department's server processing needs. Deploy custom-configured resource and environmental auditing agents to poll all servers and identify current totals of CPU, memory, adaptors, system capacity, and allocated and unallocated disk space (be sure to account for archive file space, as it often takes up 30%-40% of all data storage). During this assessment you would also identify: CPU, memory, and adaptor usage peaks; read, write, and wait cycle peaks; and all data that has not been accessed over extended periods of time.
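
What such an auditing agent collects on each host can be quite simple. The sketch below is one illustrative possibility, assuming the third-party psutil library is installed; the field names and the idea of shipping JSON samples to a central collector are assumptions, not a prescribed tool:

# Hypothetical per-host auditing agent for the assessment step. Assumes the
# third-party psutil library is installed; field names are illustrative.
import json
import socket
import time

import psutil

def snapshot():
    """Collect one utilization sample for this host."""
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": mem.percent,
        "disk_percent": disk.percent,
        "disk_free_gb": round(disk.free / 1e9, 1),
    }

if __name__ == "__main__":
    # In a real assessment, samples like this would be shipped to a central
    # collector and trended over weeks to find peaks and long-idle data.
    print(json.dumps(snapshot(), indent=2))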

Rationalize & critique -- Critique your current server environment. Identify and consolidate processing-compatible applications onto single servers, or virtualize your existing multi-server environment to share processing attributes from a common pool. (Only the second option spares you from purchasing a new server for every new application.) As a result, you would increase utilization of your existing servers from a typical 10%-20% to a more effective and efficient 40%-50%. More importantly, you would drastically decrease unexpected outages while turning your one-to-one, limited-growth environment into a completely flexible and scalable solution -- without throwing out your existing investment.

Identify all mission-critical servers. Leave those servers in a one-to-one relationship for your heavy-hitting applications, such as SAP, PeopleSoft, Siebel, and large OLTP databases (such as Oracle). Then, consolidate your non-heavy-hitting applications (file and print, Exchange, SQL, etc.) and virtualize the remaining servers to form a common pool of hardware resources. Finally, configure the CPU, memory, and adaptor resource pool to be shared with the heavy-hitting servers and applications whenever it is needed.
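
The rationalization pass itself can start as little more than a sort of the assessment data. The following sketch is hypothetical -- the mission-critical list, the host names, and the 20% idle cutoff are assumptions -- but it captures the split between servers kept one-to-one and servers that can feed the shared pool:

# Hypothetical rationalization pass over the assessment data. The
# mission-critical list, host names, and 20% idle cutoff are assumptions.
MISSION_CRITICAL = {"sap01", "oracle-oltp01"}     # stay one-to-one

def rationalize(avg_utilization_by_host, idle_cutoff=0.20):
    """Split hosts into those kept dedicated and those that can
    contribute CPU and memory to a shared, virtualized pool."""
    dedicated, pool_candidates = [], []
    for host, avg_util in avg_utilization_by_host.items():
        if host in MISSION_CRITICAL or avg_util > idle_cutoff:
            dedicated.append(host)
        else:
            pool_candidates.append(host)
    return dedicated, pool_candidates

# Example with made-up peak-hour averages from the assessment step.
averages = {"sap01": 0.65, "oracle-oltp01": 0.72,
            "exchange01": 0.08, "fileprint01": 0.05, "sql02": 0.12}
dedicated, pool = rationalize(averages)
print("dedicated:", dedicated)          # ['sap01', 'oracle-oltp01']
print("pool candidates:", pool)         # ['exchange01', 'fileprint01', 'sql02']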

Stop investing -- Stop thinking the only solution is to buy another server. Chances are you're not fully taxing the servers you already have, so tap into that existing hardware pool before buying new machines simply to increase CPU or memory horsepower. In fact, the typical IT environment not only may not need to add servers, but chances are it is positioned to cascade many of its existing servers and reduce the related server budget.

Autonomic computing

In the very near future, production-level servers will not only be virtualized, but will be configured for and capable of performing internal performance audits (from I/O processing needs at the CPU and memory level to page and buffer credit settings at the kernel level). They will automatically adjust and reconfigure themselves according to their immediate system needs and be able to virtually grow and contract to meet almost all on-demand needs -- all with either pre-designed human involvement at decision points or, eventually, without any human intervention at all.

Virtualizing your servers will enable them to identify their own CPU, memory, and adaptor requirements. They will reach out to idle servers and borrow capacity in order to complete immediate tasks. Then, without human prompting, these virtualized servers will return the capacity when it is no longer needed.
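
Tied together, that unattended cycle might look something like the sketch below, which reuses the hypothetical borrow and return helpers from the earlier example; the watermarks and polling interval are assumptions rather than recommendations:

# Hypothetical autonomic control loop: monitor, decide, and act without
# human prompting. Reuses the borrow/return helpers sketched earlier.
import time

HIGH_WATER = 0.80    # start borrowing above this utilization
LOW_WATER = 0.50     # hand capacity back once utilization falls below this

def autonomic_loop(server, peers, poll_seconds=30):
    while True:
        util = server.utilization()                    # monitor
        if util > HIGH_WATER:                          # decide: under pressure
            borrow_capacity(server, peers)             # act: grow
        elif util < LOW_WATER and server.borrowed_from:
            return_capacity(server, peers)             # act: shrink back
        time.sleep(poll_seconds)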

The ultimate goal of server virtualization is autonomic computing, that is, capacity on-demand that provides an effective roadmap for managing your information systems, regardless of size, processing demands, resource needs, time of day or night, or human availability.

Autonomic computing may not be the solution to every problem, but it certainly is a solution for most server environments.
