Picking the right cloud provider: tough questions and fear of failure

Good data centers go overboard to avoid failure, and to plan for it when it happens

Trouble at Amazon HQ? (Image: Rutlo/Flickr)

Network World's Jon Brodkin ably makes a point in a story today that I tried to make in a post last week, though mine probably came off more as No-Duh sarcasm than as an actual point:

No matter what guarantees cloud-computing service providers offer, how much better their technology is than yours or how miraculous the whole new generation of virtualized computing seems, they're still based on the same laws of physics and boxes of human-built, error-prone hardware as your data center.

OK, maybe not your data center – good data centers.

Even the most reliable data centers occasionally fail. Not completely. Not dramatically. Not lights-out, blaring klaxon, Scotty-give-me-more-power scenes of immediate peril and emergency response.

All the pieces are backed up, failovered, UPSed and monitored to within an inch of their lives.

They're also virtualized to the point that a lot of what the hardware, operating systems, apps and other software are doing is not obvious to the kinds of tools available to monitor virtualized environments.

Virtualization monitoring tools are usually made by, or focus on, one vendor's hypervisor and management apps, and they're pretty good at keeping track of what's going on in their own domains, according to Forrester's James Staten.

Data centers, clouds and even big virtual infrastructures are inherently multivendor – actually they're whatever "multi" would mean if it were possible to add a zero to the end of the numerical version of it. Data centers are a mesh of systems and networks, each of which is built of products from several vendors and enough software to create a computing environment that is very nearly unique.

Plug several very nearly unique environments together and you get something quantum physicists would win prizes for investigating, if they could handle the complexity.

Layer on top of that a bunch of software designed to hide all the complexity from you so you can add applications more easily, and you have a cloud.

If you want to keep those applications safe and stable, you have to do more than rely on the cloud provider.

You have to pull back the covers and be able to control all the dials and switches yourself – bandwidth, I/O, RAM, disk usage and access, virtual-server migration and failover, load balancing, the priority and resources devoted to particular services and applications – the whole bit.

Without that level of granular control, and the tools to understand how your applications are running and what resources they're overusing, when and why, you can't keep them performing well, let alone make sure you're prepared if anything suddenly crashes, according to Patrick Kuo, the cloud guru who built the news site Daily Caller.
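To make that concrete: the kind of visibility Kuo is talking about looks, in practice, something like pulling the raw utilization numbers yourself instead of trusting a provider dashboard. Here's a minimal sketch using boto3, AWS's Python SDK; the instance ID, region and the 75 percent threshold are purely illustrative assumptions, not anything from Kuo or Amazon.

```python
# Sketch: pull raw CPU figures for one instance to see for yourself what a
# workload is consuming. Instance ID, region and threshold are assumptions.
from datetime import datetime, timedelta

import boto3  # AWS SDK for Python

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                     # 5-minute buckets
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    flag = "  <-- overusing?" if point["Maximum"] > 75 else ""
    print(f'{point["Timestamp"]:%H:%M}  avg={point["Average"]:.1f}%  '
          f'max={point["Maximum"]:.1f}%{flag}')
```

The same pattern works for disk I/O, network throughput or any other metric the provider exposes; the point is having the numbers in hand rather than a green light on someone else's status page.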

"Even Amazon's Web Services, which let you do a lot to define how you use their resources, come up with limitations," Kuo said. "In order to get some things to change, you have to authorize more use of resources, which you'd rather do yourself or set it up to happen automatically according to specific situations."

That requires, at minimum, using a "private cloud" service from a public-cloud vendor – meaning you're still using the same platform, but you're paying extra to make sure your apps and virtual machines run on hardware, networks and storage dedicated only to you and that you can control – very much like a co-location agreement rather than stereotypical "cloud," he said.
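On AWS, that "same platform, dedicated hardware" arrangement maps roughly onto dedicated tenancy, which you can request when launching an instance. A sketch, with the AMI ID and instance type as placeholders:

```python
# Sketch: launch an instance with dedicated tenancy so it runs on hardware
# reserved to your account -- the rough equivalent of the private-cloud-on-a-
# public-platform arrangement described above. AMI and type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI
    InstanceType="m5.large",             # placeholder instance type
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},  # single-tenant hardware, at extra cost
)

print(response["Instances"][0]["InstanceId"])
```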

The cloud business, lucrative and rapidly growing as it is, poses big problems for the companies providing it, of which Amazon is only the best known, as Brodkin's story points out.

That hasn't slowed the number of companies pushing into the market with their own cloud services of various types, flavors and levels of reliability.

It has driven even technically capable companies such as Iron Mountain to back away from general-purpose cloud-computing services to focus on their own specialties.

If you're in the market for a cloud service, I'd look out for companies pitching you on services they have no history of providing, and ask some hard questions about how they can demonstrate reliability with no track record to prove it.

I'd also look at federated models that let you use more than one service provider for different functions. It's too difficult right now to split cloud-based applications among different cloud platforms, but it is possible to have specialty providers such as Iron Mountain back up data or apps being hosted elsewhere.
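One way to picture that federated split: keep the application with its primary provider and copy its data out to a second, S3-compatible provider on a schedule. The bucket names, endpoint and secondary provider below are assumptions for the sake of illustration:

```python
# Sketch: copy objects from the primary provider's bucket to a second,
# S3-compatible provider, so a failure at one doesn't take the data with it.
# Bucket names, endpoint URL and credential handling are assumptions.
import boto3

primary = boto3.client("s3")  # primary provider (AWS in this sketch)
secondary = boto3.client(
    "s3",
    endpoint_url="https://backup.example-provider.com",  # assumed S3-compatible endpoint
)

SOURCE_BUCKET = "production-data"        # assumed
BACKUP_BUCKET = "production-data-copy"   # assumed

paginator = primary.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        body = primary.get_object(Bucket=SOURCE_BUCKET, Key=obj["Key"])["Body"].read()
        secondary.put_object(Bucket=BACKUP_BUCKET, Key=obj["Key"], Body=body)
        print(f'copied {obj["Key"]}')
```

It's crude, and a real setup would stream large objects and track what's already copied, but it shows the principle: no single provider holds the only copy of anything you care about.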

No matter which it is and how reliable it seems, though, don't be placated by reassurances of ISO certifications or five-nines reliability or any of the other metrics of a good data center. Check beneath those covers and ask what happens to your data and apps if the cloud goes all Amazon unexpectedly.

Even the best data centers – especially the best data centers – have contingency plans to make sure they don't lose anything if all their ultrareliable systems suddenly turn belly up for a reason they never expected.

It's how you can tell a good data center from just another data center.
