March 30, 2009, 11:47 AM — I heard someone assert a week or so ago that Gartner Group had stated that the recession is going to be bad for cloud computing. Since this seems counter-intuitive (offerings that are at least putatively less expensive should be more attractive in difficult economic times), I decided to do some searching to find the statement and understand the context. What I found was ... nothing. No evidence that anyone from Gartner had put forth that opinion.
Nevertheless, there is something to be examined in this question: is the recession good or bad for cloud computing? Where you come down on that question depends a great deal on how you think most IT organizations will consume cloud services. By cloud services, I refer to the characteristics of cloud computing (a good definition is presented by the UC Berkeley RAD Lab, which was discussed in this blog posting); in other words, how will IT organizations achieve an infrastructure that scales easily, can be reconfigured in minutes rather than weeks, and has a transparent cost based on usage?
One side of the debate is convinced that most IT organizations will opt for internal clouds. These are "cloudy" environments that are implemented within a company's own data center(s). The case for this perspective is often put with an argument that, before IT orgs reach out to external cloud providers, they'll want to get better use out of the equipment they already have. A number of major technology vendors are full-bore behind this approach, including IBM, HP, and EMC. And now a new company has thrown its hat into the ring: Cisco. Each of these companies provides hardware, software, and services designed to transform data centers from siloed, inflexible, opaquely priced compute environments into agile technology foundations that can easily provision new resources, orchestrate entire application stacks across multiple servers, and provide transparent (and low) pricing to compute users. What's not to like?
The other side of the debate maintains that external cloud providers can leverage enormous economies of scale with infrastructure designed for automation to offer a far better computing environment than will ever be possible from a company's own data centers. From a user perspective, external clouds remove much of the hardware complexity, take on much of the system management burden, and remove provisioning planning as a necessary task in project planning. What's not to like?
Essentially, this is a disagreement rooted in whether owning computing resources is critical to a company's strategy and/or reduces risk by maintaining control within the user organization. And certainly there are strong arguments for each position. Nevertheless, I see significant challenges for the internal cloud perspective--at least for the near future--for the following reasons:
* Implementing an internal cloud imposes a new layer of software and requires new investment in hardware to make existing resources more agile. Put another way, the existing resources can't be made more agile without additional cost. It's hard to make a case for spending even more money on IT infrastructure during a recession, when most IT groups are being told to cut budgets. This is especially true given that there aren't really any successful internal cloud deployments to point to as case studies. If one were more cynical, one might note that the companies flogging this whiz-bang cloud stuff are the same companies that got IT organizations into the mess they're in today: stuck with expensive silos of technology and data. It seems unlikely that many IT organizations will sign up to spend millions in today's economic climate; if anything, the recession will hinder internal cloud initiatives.
* Is there unused equipment available to serve as internal cloud infrastructure? It's unlikely that existing, working systems will be upended to make them agile; most IT organizations are loath to modify systems that are in place and running. So what equipment will serve as the internal cloud resources? Perhaps the IT organization has gone through a virtualization-driven consolidation effort and has no-longer-needed equipment sitting around that could form the basis for an internal cloud. But unneeded hardware is usually the oldest, least capable stuff, poorly suited to underpin a cloud environment. And since the whole point of server consolidation is to get rid of equipment, there may be no spare hardware left in the data center at all. The question remains: what equipment will be used, given that buying a bunch of new gear is pretty unattractive?
* It's not consistent with the way new platforms have been adopted in the past. When new platforms like client/server or the Web came along, most organizations did not convert what they already had running on older platforms; instead, they put new systems on the new platform while leaving the legacy systems unchanged. A big motivation for the whole SOA movement is to let new platforms access older applications through an encapsulation and RPC interface. The internal cloud hypothesis obligates IT orgs to impose the new platform on the existing infrastructure, which is not how things have been done in the past.
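The encapsulation-and-RPC point above can be sketched in a few lines. This is a minimal illustration of the pattern, not any particular SOA product; the names (`LegacyOrderSystem`, `OrderServiceFacade`) are hypothetical:

```python
# A minimal sketch of SOA-style encapsulation: a new platform reaches a
# legacy application through a thin service facade, so the legacy code
# itself is never modified. All names here are hypothetical.

class LegacyOrderSystem:
    """Stands in for an older application nobody wants to touch."""
    def lookup(self, order_id):
        # In reality this might screen-scrape a mainframe or call a
        # 20-year-old stored procedure; here it just returns a record.
        return {"id": order_id, "status": "SHIPPED"}

class OrderServiceFacade:
    """The service interface the new platform talks to (e.g., via RPC)."""
    def __init__(self, legacy):
        self._legacy = legacy

    def get_order_status(self, order_id):
        record = self._legacy.lookup(order_id)
        return record["status"]

facade = OrderServiceFacade(LegacyOrderSystem())
print(facade.get_order_status(42))
```

The new platform depends only on the facade's interface; the legacy system stays unchanged, which is precisely why this adoption pattern has historically been preferred over reworking existing infrastructure.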
With respect to the reservations regarding external cloud adoption, I covered them extensively in my recent series of posts on "The Case Against Cloud Computing," so I won't rehash those topics. Suffice it to say, IT organizations have significant reservations regarding external cloud use, which are voiced quite loudly. To quote one comment offered about a recent posting of mine:
"Imagine if an employee from a "Cloud Provider" decided to sell private company information to a rival for personal gain (Please don't say we are protected 100 percent from this). The costs associated with this type of activity are not accounted for in your estimations / hypotheses above. What if we calculated lost productivity due to network bandwidth, outages that cause employees to have to work overtime etc...? The list goes on and on. Cloud computing is a nice dream, but when the "Bean Counters" start adding up the costs associated with the obstacles listed above, I am sure the decision landscape changes dramatically."
What's interesting about this commenter's list is that he offers these items as risks of cloud computing alone, with no consideration of their counterparts in internal data centers. It's as if only an external service provider would suffer outages, or as if an internal employee would never sell data. The right assessment isn't a black-versus-white comparison; it's figuring out which alternative is the lightest shade of gray, all factors considered.
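The "shade of gray" framing can be made concrete with a back-of-envelope comparison. The sketch below is purely illustrative: every dollar figure, lifespan, and utilization rate is an invented assumption, not pricing data from any vendor or study.

```python
# Hypothetical back-of-envelope comparison of owning servers vs. renting
# usage-priced cloud capacity. All numbers are invented for illustration.

def annual_cost_owned(servers, purchase_price, lifespan_years,
                      admin_cost_per_server):
    """Annualized cost of owned capacity; idle machines cost just as much."""
    hardware = servers * purchase_price / lifespan_years
    operations = servers * admin_cost_per_server
    return hardware + operations

def annual_cost_cloud(server_hours_needed, hourly_rate):
    """Usage-based cost: pay only for the hours actually consumed."""
    return server_hours_needed * hourly_rate

# Hypothetical workload: peak demand of 20 servers, 30% average utilization.
HOURS_PER_YEAR = 24 * 365
owned = annual_cost_owned(servers=20, purchase_price=5000,
                          lifespan_years=3, admin_cost_per_server=2000)
cloud = annual_cost_cloud(server_hours_needed=20 * HOURS_PER_YEAR * 0.30,
                          hourly_rate=0.10)
print(f"Owned: ${owned:,.0f}/yr; cloud at 30% utilization: ${cloud:,.0f}/yr")
```

Change the utilization, the hourly rate, or add a line item for a data breach or an outage, and the comparison can flip the other way, which is exactly the commenter's point about the bean counters: the answer depends on whose assumptions go into the spreadsheet, on both sides of the ledger.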
Indeed.com aggregates job postings from around the Internet; you can run search terms against its database to identify jobs. As the chart shows, job postings seeking cloud computing skills have jumped enormously over the past year. While still a small percentage of total postings, the growth is near vertical. Given that there really aren't many internal clouds yet, these postings must be for skills relating to external clouds, probably mostly Amazon AWS. So despite the commenter's assertion that obstacles will prevent cloud adoption, it seems that, in the real world, companies are willing to accept those risks as part of the cost of using the cloud.
So, is a recession good or bad for cloud computing? I believe that large expenditures for internal clouds will be problematic for the next year or so; however, as the chart indicates, there seems to be an enthusiasm for use of external clouds. Since most people believe that one of cloud computing's strengths is reducing IT operational spend, I'm willing to bet that, a year from now, the chart will continue to rise on a steep trajectory. Overall, I'd say tough times tend to bring forth innovative approaches, and cloud computing is likely to be a beneficiary of the current recession.