AWS says a typo caused the massive S3 failure this week

The cloud provider is implementing several changes to prevent similar events

Everyone makes mistakes. But at Amazon Web Services, a single mistyped command can trigger a massive outage that cripples popular websites and services.

That's apparently what happened earlier this week, when the AWS Simple Storage Service (S3) in the provider's Northern Virginia region experienced an 11-hour system failure.

Other Amazon services in the US-EAST-1 region that rely on S3, such as Elastic Block Store, Lambda, and new instance launches for the Elastic Compute Cloud infrastructure-as-a-service offering, were also affected by the outage.

AWS apologized for the incident in a postmortem released Thursday. The outage affected the likes of Netflix, Reddit, Adobe, and Imgur. More than half of the top 100 online retail sites experienced slower load times during the outage, website monitoring service Apica said.

Here’s what set off the outage, and what Amazon plans to do:

According to Amazon, the S3 billing process was running more slowly than expected, so an authorized S3 employee executed a command that was supposed to "remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process."

One of the parameters for the command was entered incorrectly, and it took down a much larger set of servers, including those supporting two critical S3 subsystems.

The index subsystem “manages the metadata and location information of all S3 objects in the region,” while the placement subsystem “manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate.”

While both subsystems are built to be fault tolerant, the removal took down enough servers that each had to be fully restarted.

As it turns out, Amazon hadn't fully restarted those subsystems in its larger regions for several years, and S3 had grown massively in the intervening time. Restarting them took longer than expected, which prolonged the outage.

In response to this incident, AWS is making several changes to its internal tools and processes. The tool responsible for the outage has been modified to remove capacity more slowly and to block any operation that would take capacity below minimum safety levels.

AWS is also evaluating its other tools to make sure they have similar safety systems in place.
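To illustrate the kind of guard AWS describes, here's a minimal sketch of a capacity-removal safety check. The names (Fleet, MIN_CAPACITY_FRACTION, remove_servers) are hypothetical, not AWS's actual tooling:

```python
# Hypothetical capacity-removal guard; Fleet and MIN_CAPACITY_FRACTION are
# illustrative names for this sketch, not AWS's actual internal tooling.

MIN_CAPACITY_FRACTION = 0.9  # never drop below 90% of original capacity


class Fleet:
    def __init__(self, servers):
        self.servers = list(servers)
        # The safety floor is fixed at construction time.
        self.safety_floor = int(len(self.servers) * MIN_CAPACITY_FRACTION)

    def remove_servers(self, to_remove, rate_per_step=2):
        """Remove servers in small batches, refusing any step that would
        take capacity below the safety floor."""
        for i in range(0, len(to_remove), rate_per_step):
            batch = to_remove[i:i + rate_per_step]
            if len(self.servers) - len(batch) < self.safety_floor:
                raise RuntimeError(
                    "refusing removal: capacity would fall below the "
                    f"safety floor of {self.safety_floor} servers"
                )
            for server in batch:
                self.servers.remove(server)


fleet = Fleet([f"s3-host-{n}" for n in range(100)])
fleet.remove_servers([f"s3-host-{n}" for n in range(3)])        # fine
# fleet.remove_servers([f"s3-host-{n}" for n in range(3, 40)])  # raises
```

With a hard floor like this in place, a mistyped parameter can still remove the wrong servers, but it can no longer remove enough of them to take a subsystem offline.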

AWS engineers are also going to start refactoring the S3 index subsystem to help speed up reboots and reduce the blast radius of future problems.
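The postmortem doesn't spell out the refactoring, but a common way to shrink blast radius is to partition an index into independent cells, so that a failure or restart touches only one slice of the keyspace. A minimal sketch of that general idea follows; the cell count and hashing scheme are assumptions for illustration, not S3's actual design:

```python
# Illustrative cell-partitioned index; the cell count and hash scheme are
# assumptions for this sketch, not S3's actual design.

import hashlib

NUM_CELLS = 8

def cell_for_key(key: str) -> int:
    """Map an object key to a fixed cell via a stable hash."""
    return hashlib.sha256(key.encode("utf-8")).digest()[0] % NUM_CELLS

# Each cell holds metadata for its own slice of the keyspace, so restarting
# one cell leaves the other NUM_CELLS - 1 cells serving traffic.
cells = [dict() for _ in range(NUM_CELLS)]

def put_metadata(key: str, location: str) -> None:
    cells[cell_for_key(key)][key] = location

def get_metadata(key: str):
    return cells[cell_for_key(key)].get(key)

put_metadata("photos/cat.jpg", "shard-17/block-042")
assert get_metadata("photos/cat.jpg") == "shard-17/block-042"
```

Smaller cells also restart faster, which addresses the slow-reboot problem AWS ran into directly.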

The cloud provider has also changed its Service Health Dashboard administration console to run across multiple regions. AWS employees were unable to update the dashboard during the outage because the console relied on S3 in the affected region.
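The general pattern is straightforward: keep replicas of the status page in more than one region and fail over between them. Here's a minimal sketch with hypothetical endpoint URLs, not AWS's dashboard internals:

```python
# Hypothetical multi-region status-page fetch; the endpoint URLs are
# placeholders for this sketch, not AWS's dashboard internals.

import urllib.request

DASHBOARD_REPLICAS = [
    "https://status.example.com/us-east-1/health.json",  # primary region
    "https://status.example.com/us-west-2/health.json",  # fallback replica
]

def fetch_dashboard_state(timeout: float = 2.0) -> bytes:
    """Return the payload from the first replica that answers, so an
    outage in one region can't take the status page down with it."""
    last_error = None
    for url in DASHBOARD_REPLICAS:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:  # URLError and timeouts both subclass OSError
            last_error = err
    raise RuntimeError(f"all dashboard replicas unreachable: {last_error}")
```

Any single-region dependency in a status page recreates exactly the failure mode AWS hit here: the tool meant to report the outage goes down with it.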

