AWS says a typo caused the massive S3 failure this week

The cloud provider is implementing several changes to prevent similar events

Everyone makes mistakes. But at Amazon Web Services, a single mistyped command can trigger a massive outage that cripples popular websites and services.

That's apparently what happened earlier this week, when the AWS Simple Storage Service (S3) in the provider's Northern Virginia region experienced an 11-hour system failure.

Other Amazon services in the US-EAST-1 region that rely on S3, such as Elastic Block Store, Lambda, and new instance launches for the Elastic Compute Cloud infrastructure-as-a-service offering, were all impacted by the outage.

AWS apologized for the incident in a postmortem released Thursday. The outage affected the likes of Netflix, Reddit, Adobe, and Imgur. More than half of the top 100 online retail sites experienced slower load times during the outage, website monitoring service Apica said.

Here’s what set off the outage, and what Amazon plans to do:

According to Amazon, the S3 billing process had been running more slowly than anticipated, and an authorized S3 employee executed a command that was supposed to "remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process."

One of the parameters for the command was entered incorrectly, and a far larger set of servers was removed than intended, including machines supporting a pair of critical S3 subsystems.

The index subsystem “manages the metadata and location information of all S3 objects in the region,” while the placement subsystem “manages allocation of new storage and requires the index subsystem to be functioning properly to correctly operate.”

While those subsystems are built to be fault tolerant, the command removed so many servers that both had to be fully restarted.

As it turns out, Amazon hadn't fully restarted those subsystems in its larger regions for several years, and S3 had experienced massive growth in the intervening time. Restarting them took longer than expected, which added to the length of the outage.

In response to this incident, AWS is making several changes to its internal tools and processes. The tool that caused the outage has been modified to remove capacity more slowly and to block any operation that would take a subsystem below its minimum safe capacity.
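
The postmortem doesn't describe the internal tool itself, but the safeguard it outlines can be sketched in broad strokes. Below is a minimal, hypothetical Python sketch (the `CapacityPool` class, `min_safe_capacity`, and `max_batch_size` are illustrative names, not AWS internals): it removes capacity in small batches and refuses any request that would drop a subsystem below its safety floor.

```python
# Hypothetical sketch of a capacity-removal guard: remove servers slowly,
# and refuse any request that would drop a subsystem below its safety floor.

class CapacityRemovalError(Exception):
    pass


class CapacityPool:
    def __init__(self, total_servers, min_safe_capacity, max_batch_size=2):
        self.active = total_servers
        self.min_safe_capacity = min_safe_capacity  # safety-check floor
        self.max_batch_size = max_batch_size        # servers removed per step

    def remove_servers(self, requested):
        # Block operations that would take capacity below the safety level.
        if self.active - requested < self.min_safe_capacity:
            raise CapacityRemovalError(
                f"refusing to remove {requested} servers: capacity would fall "
                f"below the floor of {self.min_safe_capacity}"
            )
        # Remove capacity in small batches rather than all at once, so a
        # mistake can be caught before it takes out the whole fleet.
        remaining = requested
        while remaining > 0:
            batch = min(self.max_batch_size, remaining)
            self.active -= batch
            remaining -= batch


# A fat-fingered request for 500 removals instead of 5 is rejected outright.
pool = CapacityPool(total_servers=600, min_safe_capacity=400)
pool.remove_servers(5)      # fine
# pool.remove_servers(500)  # raises CapacityRemovalError
```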

AWS is also evaluating its other tools to make sure they have similar safety systems in place.

AWS engineers are also going to start refactoring the S3 index subsystem to help speed up reboots and reduce the blast radius of future problems.
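
Amazon hasn't published the details of that refactoring, but the general idea of breaking a large subsystem into smaller, independent cells can be illustrated with a rough, hypothetical Python sketch (the cell count and hashing scheme below are purely illustrative): each object key maps to one cell, so losing or restarting a single cell affects only a fraction of objects rather than the whole region.

```python
import hashlib

NUM_CELLS = 16  # illustrative; smaller cells also restart faster


def cell_for_key(object_key):
    """Map an object key to one of NUM_CELLS independent index cells."""
    digest = hashlib.sha256(object_key.encode()).hexdigest()
    return int(digest, 16) % NUM_CELLS


# If one cell is down for a restart, only keys that hash to it are affected.
down_cells = {3}


def lookup(object_key):
    cell = cell_for_key(object_key)
    if cell in down_cells:
        return f"temporarily unavailable (cell {cell} is restarting)"
    return f"served by cell {cell}"


print(lookup("photos/2017/02/outage-postmortem.txt"))
```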

The cloud provider has also changed its Service Health Dashboard administration console to run across multiple regions. AWS employees were unable to update the dashboard during the outage because the console relied on S3 from the affected region.
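
The lesson is an old one: a status page shouldn't depend on the infrastructure it reports on. A very rough, hypothetical sketch of the multi-region approach (the endpoint hostnames are made up for illustration) is to publish and fetch the dashboard from several regions and fall back when one is unreachable:

```python
import urllib.request

# Hypothetical regional endpoints for a status dashboard; hostnames are illustrative.
STATUS_ENDPOINTS = [
    "https://status.us-east-1.example.com/health",
    "https://status.us-west-2.example.com/health",
    "https://status.eu-west-1.example.com/health",
]


def fetch_status():
    for url in STATUS_ENDPOINTS:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read().decode()
        except OSError:
            continue  # this region is unreachable; try the next one
    return "status unavailable from all regions"


print(fetch_status())
```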

