Amazon cites cause of recent outage, issues refunds

By Brandon Butler, Network World |  Cloud Computing, Amazon Web Services

An unexpected bug that surfaced after new hardware was installed in one of Amazon Web Services' Northern Virginia data centers caused last week's more than 12-hour outage, which brought down popular sites such as Reddit, Imgur, Airbnb and Salesforce.com's Heroku platform, according to a post-mortem issued by Amazon.

In response, AWS says it is refunding certain charges to customers affected by the outage, specifically those who had trouble accessing AWS application programming interfaces (APIs) during the height of the downtime.

AWS says the latest outage was limited to a single availability zone in the US-East-1 region, but that an overly aggressive throttling policy, which the company has also vowed to fix, spread the impact into multiple zones for some customers.
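
To illustrate what an overly aggressive throttling policy can do, the sketch below shows a minimal token-bucket rate limiter. This is a generic technique, not AWS's actual implementation, and the capacity and refill numbers are made up for the example; the point is that a bucket sized too conservatively rejects ordinary bursts of legitimate API calls along with the traffic the throttle is meant to restrain.

```python
# Illustrative sketch only: a minimal token-bucket throttle. If the bucket is
# sized too aggressively (small capacity, slow refill), ordinary bursts of API
# requests get rejected along with the abusive traffic the throttle targets.
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise the request is throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A hypothetical, overly aggressive policy: room for a 2-request burst,
# refilled at 1 token per second.
bucket = TokenBucket(capacity=2, refill_per_sec=1)
results = [bucket.allow() for _ in range(10)]  # a normal 10-call burst
print(results)  # roughly [True, True, False, False, ...] -- most calls rejected
```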

RELATED: How to make sure your app doesn't go down in the next cloud outage

MORE CLOUD: 12 free cloud storage services

The problem arose Oct. 22 from what AWS calls a "latent memory bug" that surfaced after a failed piece of hardware was replaced in one of Amazon's data centers. The system failed to recognize the replacement, which set off a chain reaction inside AWS's Elastic Block Store (EBS) service that eventually spread to its Relational Database Service (RDS) and Elastic Load Balancers (ELBs). Reporting agents on the EBS servers kept trying to contact the collection server that had been removed.

"Rather than gracefully deal with the failed connection, the reporting agent continued trying to contact the collection server in a way that slowly consumed system memory," the post-mortem reads. It goes on to note that "our monitoring failed to alarm on this memory leak."

CAN YOU PREDICT AN OUTAGE? Startup claims it saw early signs of Amazon's cloud outage

AWS says it is difficult to set accurate alarms for memory usage because the EBS system dynamically consumes resources as needed, so memory usage fluctuates frequently. The system is designed to tolerate a degree of missing servers, but eventually the memory leak became so severe that it started affecting customer requests. From there, the issue snowballed -- "the number of stuck volumes increased quickly," AWS reports.
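
One common way around the fixed-threshold problem the post-mortem describes is to alarm on sustained memory growth rather than on an absolute level. The sketch below is a generic illustration of that idea, not AWS's monitoring; the window size and growth cutoff are arbitrary example values.

```python
# Illustrative sketch only: alarming on sustained memory growth rather than a
# fixed threshold, since a system that allocates memory dynamically can sit at
# very different "normal" levels. WINDOW and GROWTH_CUTOFF are example values.
from collections import deque

WINDOW = 60           # number of samples to keep (e.g., one per minute)
GROWTH_CUTOFF = 0.20  # alarm if usage grew more than 20% across the window

samples = deque(maxlen=WINDOW)

def record_memory_sample(used_bytes: int) -> bool:
    """Record a memory reading; return True if a leak-like trend is detected."""
    samples.append(used_bytes)
    if len(samples) < WINDOW:
        return False  # not enough history yet
    oldest, newest = samples[0], samples[-1]
    # Sustained growth across the whole window looks like a leak, even if the
    # absolute level would still pass a static threshold check.
    return oldest > 0 and (newest - oldest) / oldest > GROWTH_CUTOFF
```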


Originally published on Network World.