"We are entering the era of utility supercomputing where anybody can dial up computational resources and massive storage requirements on the fly," said Wood. "Traditionally these organisations would have to provision for 10-15% over the peaks in demand, but the cloud allows for bursty scalability, lowering the barriers to entry and allowing them to spend at least 70% of their time on differentiated work, rather than keeping the light on."
Wood's assertion builds on the ideas of Jason Stowe, CEO of Cycle Computing, who first proposed the concept of utility supercomputing in October 2011. Cycle Computing helps researchers and businesses run supercomputing applications on Amazon's EC2 infrastructure.
"The problem is, today, researchers are in the long-term habit of sizing their questions to the compute cluster they have, rather than the other way around. This isn't the way we should work. We should provision compute at the scale the questions need," said Stowe in October.
"We're talking about taking questions that require a million hours of computation, and answering them in a day. Securely. At reasonable cost.
"Scratch the surface of this idea, and you'll see a world of research the way I see it. No more waiting. No more R&D folks task-switching for days or weeks while compute is run. Only answers at the speed of thought, at the speed of invention, at the scale of the question."
Amazon in November launched a public beta of Cluster Compute Eight Extra Large (CC2), its most powerful cloud service yet. Every CC2 instance has two Intel Xeon E5 processors, each with eight hardware cores, as well as 60.5GB of RAM and 3.37TB of storage. It communicates with other instances - or virtual servers - in a cluster using 10 Gigabit Ethernet.
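To give a sense of what "dialling up" this kind of capacity looks like in practice, the sketch below shows how a small cluster of such instances might be requested programmatically. It uses the boto3 Python SDK (a later AWS client, not the tooling Cycle Computing used), and the AMI ID, key pair name and instance count are illustrative placeholders rather than real values.

# Minimal sketch: requesting a small cluster of cc2.8xlarge instances.
# AMI ID, key pair and counts are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A "cluster" placement group keeps the instances close together on the
# low-latency 10 Gigabit Ethernet fabric described above.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Ask for four instances inside that placement group.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # hypothetical HPC-ready machine image
    InstanceType="cc2.8xlarge",
    MinCount=4,
    MaxCount=4,
    KeyName="my-keypair",            # hypothetical SSH key pair
    Placement={"GroupName": "hpc-cluster"},
)

# Print the IDs of the instances that were started.
for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])

Scaling the same request up or down is simply a matter of changing the instance counts, which is the "bursty scalability" Wood describes: capacity is provisioned for the size of the question, then released when the answer is in.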