April 27, 2011, 11:46 AM —
No matter what guarantees cloud-computing service providers offer, how much better their technology is than yours or how miraculous the whole new generation of virtualized computing seems, they're still based on the same laws of physics and boxes of human-built, error-prone hardware as your data center.
OK, maybe not your data center – good data centers.
Even the most reliable data centers occasionally fail. Not completely. Not dramatically. Not lights-out, blaring klaxon, Scotty-give-me-more-power scenes of immediate peril and emergency response.
All the pieces are backed up, failovered, UPSed and monitored to within an inch of their lives.
They're also virtualized to the point that much of the hardware, operating systems, apps and other software underneath is invisible to the kinds of tools available to monitor virtualized environments.
Virtualization monitoring tools are usually made by, or focus on, one vendor's hypervisor and management apps, and they're pretty good at keeping track of what's going on in their own domains, according to Forrester's James Staten.
Data centers, clouds and even big virtual infrastructures are inherently multivendor – actually, they're whatever "multi" would mean if you could tack a zero onto the end of it. Data centers are a mesh of systems and networks, each of which is built from products of several vendors and enough software to create a computing environment that is very nearly unique.
Plug several very nearly unique environments together and you get something quantum physicists would win prizes for investigating, if they could handle the complexity.
Layer on top of that a bunch of software designed to hide all the complexity from you so you can add applications more easily, and you have a cloud.
If you want to keep those applications safe and stable, you have to do more than rely on the cloud provider.