AWS Elastic Beanstalk, an application development and deployment platform, also experienced delays in launching, updating and deleting environments; that issue was resolved around the same time as the EBS issue.
Since yesterday's outage there has been talk in some circles about spreading AWS workloads across multiple availability zones as a way to increase the fault tolerance of cloud deployments. AWS offers multiple availability zones within each of the regions where the company operates data centers, such as the US-East region in Northern Virginia. Amazon says the availability zones are isolated from one another to improve tolerance to such issues.
But Network World reader Biju Chacko said he experienced a failure spanning multiple availability zones. "This is clearly an AWS screwup - their recommended redundancy strategies are not working," he wrote in a comment.
This is the third significant outage AWS has experienced in the past two years. In late June, powerful storms that caused power outages in the mid-Atlantic region partially triggered an outage that was then worsened by bugs and bottlenecks within AWS's system. The company issued a detailed postmortem report after that event.
In April 2011, AWS experienced another major outage that took down Reddit, Foursquare, HootSuite, Quora and others, some for as many as four days.