Gartner warns: Cloud users, save yourselves
Need for disaster prep doesn't stop just because you run on someone else's cloud
Cloud computing is drastically changing not only the market for IT services, but also the assumptions CIOs make about how to keep their companies out of trouble in case of disaster – according to a new report from Gartner, and to nearly anyone who was paying attention to the Amazon EC2 outage last month.
The IT services market that includes external cloud providers is worth $820 billion per year and growing fast as more and more applications and data-center functions like storage migrate or simply expand into the cloud.
The hype and the promise of flexible capacity, low costs and top-quality services have overcome the natural caution of many – even some CIOs, normally a tribe of deeply cynical pessimists – according to Frank Ridder, a research VP at Gartner, who published a report yesterday warning that not everything about the cloud is light and fluffy.
"Cloud service sourcing is immature and fraught with potential hazards," Ridder wrote. "Cloud computing is driving discontinuity that introduces exciting opportunities and costly challenges. Organizations need to ... develop realistic cloud sourcing strategies and contracts that can reduce risk."
I couldn't agree more.
Neither could Victor Janulitis, founder of IT consulting and employment management company Janco Associates, Inc.
"No matter whether you call it a cloud or something else, a process has to run on a computer and the computer has to sit somewhere, usually in a data center, and that has to be in a location where it's secure and stable and that has a complete disaster recovery plan in place in case something does happen," he said. "If you go into a cloud environment and don't have a good understanding of what it is or what could happen, what's going to happen isn't good."
That's what happened to Amazon customers Reddit, HootSuite, Foursquare and others. Some were out for days; others never got back data that was lost during the outage.
That was an anomaly.
The 40 largest cloud providers had a pretty decent uptime rating during the past 12 months, Computerworld's Patrick Thibodeau reports.
According to services-tracking company AppNeta, the worst of the 40 cloud services was down for seven hours during the past year. The best was down for just three minutes. The average was 4.6 hours.
Missing seven hours' worth of anything out of the 8,760 hours in a year isn't bad.
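To put those downtime figures in the "nines" terms providers like to advertise, here's a quick back-of-the-envelope calculation (a sketch, assuming a non-leap year of 8,760 hours and the AppNeta numbers quoted above):

```python
# Convert annual downtime into an availability percentage.
# Assumes a 365-day year: 365 * 24 = 8,760 hours.
HOURS_PER_YEAR = 365 * 24

def availability(downtime_hours: float) -> float:
    """Return uptime as a percentage of the year."""
    return 100.0 * (1 - downtime_hours / HOURS_PER_YEAR)

# Worst provider: 7 hours down; average: 4.6 hours; best: 3 minutes.
for label, down in [("worst", 7.0), ("average", 4.6), ("best", 3 / 60)]:
    print(f"{label}: {availability(down):.3f}% uptime")
```

Even the worst of the 40 comes out around 99.92 percent – roughly "three nines" – while the best clears "five nines." Which is exactly why a single multi-day outage feels like such an anomaly.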
Thibodeau quotes Ken Brill, executive director of the Uptime Institute (you can tell he's picky just from the name of his organization, can't you?), as pointing out that the Fukushima nuclear power plant had almost 100 percent availability for more than 40 years, which is also pretty decent.
Then it was wiped out by a tsunami and a series of explosions, fires and radiation leaks that scared off even some of the robots sent in to scope out the disaster.
"You have to realize that things happen, things go wrong, and you have to be ready to deal with it," Janulitis said, not talking about nuclear power plants.
"One of the fallacies about the public cloud is that it's so safe," Janulitis said. "Public cloud has a huge risk associated with it because if it goes down you have no control over its recovery. You can't run your own recovery plan the way you would with your own data center.
"You just have to wait."