August 16, 2011, 6:41 AM — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
A private cloud architecture leverages the power of end-to-end virtualization so workloads can be fluidly distributed among a pool of servers, but this ideal cannot be achieved with traditional network infrastructure.
Conventional server I/O is costly, complex and inflexible, effectively locking applications to specific groups of servers. A fresh approach to infrastructure is required, one built on virtual I/O technologies.
The foundation of the private cloud comprises two hardware layers: virtualized servers and a virtualized infrastructure. Server virtualization lets you run any application on any server. Virtual infrastructure lets you flexibly link those servers to whatever network and storage resources are required.
Think of it as universal, any-to-any connectivity. Any server, regardless of vendor, can be connected to any data center resource -- regardless of interface type -- and that connectivity can be managed in real time on live servers.
Traditional server I/O was not designed for this. It was designed for static servers, each configured with the I/O needed to satisfy that device's specific application requirements within the three-tier data center model.
Private clouds completely change the traditional deployment model, thus creating the need for a new infrastructure model. The objective of a private cloud is to create a dynamic pool of compute resources that can be deployed as needed: any application on any server, and any collection of servers assigned to any group of users. Think of it as dynamically configured virtual data centers within the data center. But to achieve this objective, servers need software-configurable connections to all networks and storage.
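To make the idea of software-configurable connections concrete, here is a minimal sketch of such a fabric in Python. All names here (VirtualIOFabric, attach, migrate, the "prod-lan" and "san-a" resources) are hypothetical illustrations, not any vendor's actual API: the point is that attaching, detaching and re-pointing a server's network and storage connections becomes a software operation rather than a recabling job.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """A server and the virtual I/O connections it currently holds."""
    name: str
    connections: set = field(default_factory=set)

class VirtualIOFabric:
    """Toy model of a software-configurable I/O fabric: any server can be
    attached to or detached from any network or storage resource at runtime."""

    def __init__(self, resources):
        self.resources = set(resources)  # e.g. Ethernet networks, FC storage
        self.servers = {}

    def add_server(self, name):
        self.servers[name] = Server(name)

    def attach(self, server, resource):
        # Connect a live server to a network or storage resource in software.
        if resource not in self.resources:
            raise ValueError(f"unknown resource: {resource}")
        self.servers[server].connections.add(resource)

    def detach(self, server, resource):
        self.servers[server].connections.discard(resource)

    def migrate(self, workload_connections, src, dst):
        # Re-point a workload's I/O from one server to another without
        # touching any physical cabling.
        for r in workload_connections:
            self.detach(src, r)
            self.attach(dst, r)

# Example: move a workload's network and storage from host1 to host2.
fabric = VirtualIOFabric(["prod-lan", "san-a"])
fabric.add_server("host1")
fabric.add_server("host2")
fabric.attach("host1", "prod-lan")
fabric.attach("host1", "san-a")
fabric.migrate({"prod-lan", "san-a"}, "host1", "host2")
```

In a real deployment the `attach`/`detach` calls would reprogram virtual NICs and HBAs on a converged fabric; the sketch only captures the management model -- connectivity as mutable state rather than fixed wiring.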