At 00:25 UTC on the morning of May 8, one availability zone of one region of one cloud provider began to fail in a structurally interesting way. The AWS Health Dashboard describes the cause with admirable composure: a thermal event. The site of the thermal event is use1-az4, an availability zone in the company's Northern Virginia us-east-1 region, a region that is, in The Register's preferred adjective, notorious. The hardware is off. The customer workloads that were running on the hardware are off. The services that are nominally global, but which happen to thread their control plane through us-east-1, are degraded. The dashboard's own description: "EC2 instances and EBS volumes hosted on impacted hardware are affected by the loss of power during the thermal event." And the second sentence: "Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments."
That second sentence is doing more work than it would like to be doing.
## What a thermal event actually is
Datacenters get hot. The hot is not, on the inside, a metaphor. Tens of thousands of racks, each pulling kilowatts of power, each shedding all of that power as heat into the room they sit in, are kept in service by chillers and pumps and air handlers whose job is to move that heat out of the building before the silicon inside makes its own pre-emptive arrangements. Thermal event is corporate PR for the moment the cooling loses that race. The cooling system slows or stops; the racks heat up; the firmware decides, correctly, that off is preferable to on fire; and the customers' workloads go away in lockstep, because the customers' workloads were the thing the racks were running.
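For a sense of scale, a back-of-envelope sketch. Every number below is an assumption invented for illustration; AWS does not publish per-AZ rack counts or densities:

```python
# Rough heat math for one datacenter-scale footprint. Every figure here is
# an assumption for illustration; none of these numbers are published.
racks = 30_000            # assumed rack count across the AZ's buildings
kw_per_rack = 10          # assumed average electrical draw per rack, in kW

# Essentially every watt drawn becomes heat the cooling plant must reject.
heat_load_mw = racks * kw_per_rack / 1_000
print(f"~{heat_load_mw:,.0f} MW of heat to move out of the building, continuously")
```

At those assumed numbers, the cooling plant's job is on the order of hundreds of megawatts, around the clock. That is the job it stopped doing at 00:25 UTC.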
The composure of the dashboard text is the diagnostic. Thermal event presents the failure mode as if it were a meteorological phenomenon: something that happened to the building, rather than something the building did to itself when the cooling design ran out of margin. The phrase is true. It is also a categorical sleight of hand. The honest description of what happened in use1-az4 between 00:25 UTC and now, depending on when now is, is that the building got too hot to keep running the customer workloads on its racks, and the operators did not notice in time to turn things off in an orderly way.
## Blast radius
What is off, per the dashboard at the time of writing, is a list. Primarily EC2 and EBS in use1-az4: the compute and the block storage that customer workloads were depending on. Then the AZ-cascading list: IoT Core, Elastic Load Balancing, NAT Gateway, Redshift, all of which had control-plane or data-plane components in the affected hardware. Then the global-with-us-east-1-dependency list, which is the part that turns a single-AZ failure into a planetary one: IAM, CloudFront, Route 53, DynamoDB Global Tables. These are the services your engineering team was assured were redundant. They are still redundant. The redundancy just routes through us-east-1.
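A note on the name: use1-az4 is an AZ ID, not a zone name, and the mapping from ID to zone name is shuffled per account. If you want to know whether your us-east-1a is everyone else's use1-az4, the EC2 API will tell you. A minimal sketch, assuming boto3 and configured credentials for the account in question:

```python
import boto3

# Resolve the AZ ID from the dashboard to this account's zone name.
# AZ IDs are stable across accounts; zone names are per-account aliases.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-id", "Values": ["use1-az4"]}]
)
for zone in resp["AvailabilityZones"]:
    print(f"{zone['ZoneId']} is {zone['ZoneName']} in this account")
```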
The named-customer roster, at the time of writing, is partial; outages in progress accumulate names slowly, because the affected companies' status pages take longer to update than their workloads take to fail. Coinbase, per public reporting, has had core exchange functions disrupted for more than five hours. KoboToolbox, the humanitarian-data-collection platform whose Global instance went offline at 00:32 UTC, posted an announcement to its community forum shortly afterward. There are more names. There will be more names. They will arrive on a schedule determined by how long each affected company's communications team takes to admit, in writing, that the company is not in fact serving traffic.
## The hold-music economy
What this looks like from the customer side, at thousands of companies simultaneously, is the same scene rendered in different SaaS dashboards. An on-call engineer is paged. The pager is loud. The dashboard is red. The Slack channel is full. The runbook says failover to another region. The runbook has not been tested in many months, and the Terraform that would carry out the failover is several versions out of date. The team is on a video call with managers, architects, and a customer-success representative who is asking when the customer-facing status page will be updated. Someone has been on hold with AWS support for hours. The status-page updates have, with a few exceptions, said only that AWS is continuing to investigate. They have said this consistently for the duration of the incident, in the same composed tone, at regular intervals, in the same shape of paragraph.
This is what cloud-native looks like at the moment the cloud is having a bad morning. The dashboards work. The escalation paths exist. The runbooks were written. The SLAs are documented. None of these instruments are designed to do the one thing the operator on the call actually needs them to do, which is put the customer's workload back on a different building's worth of hardware. The architecture is dependent on the building. The building is, at the moment, a thermal event.
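None of which helps the operator on the call, but the instruments can at least measure your share of the blast radius. A triage sketch, assuming boto3, configured credentials, and that you substitute the zone name use1-az4 maps to in your account (see the AZ-ID lookup earlier; the name below is a placeholder):

```python
import boto3

# Placeholder: replace with the zone name use1-az4 resolves to
# in your account. It is different for every account.
ZONE_NAME = "us-east-1c"

ec2 = boto3.client("ec2", region_name="us-east-1")
impacted = []
# Enumerate every instance parked in the affected zone, whatever its state.
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "availability-zone", "Values": [ZONE_NAME]}]
):
    for reservation in page["Reservations"]:
        impacted.extend(i["InstanceId"] for i in reservation["Instances"])

print(f"{len(impacted)} instances in {ZONE_NAME}: {impacted}")
```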
## The pattern has a history
The current outage is not the first time us-east-1 has produced a multi-hour, cross-customer impairment that read in real time as a global failure. The most recent comparable cases:
| Date | What failed | Approximate duration | Public impact |
|---|---|---|---|
| 2026-05-08 | use1-az4 thermal event (EC2/EBS power loss) | In progress at time of writing | Coinbase (core exchange disrupted >5 hours), KoboToolbox; cascading impairment in IoT Core, ELB, NAT Gateway, Redshift, IAM, CloudFront, Route 53, DynamoDB Global Tables |
| 2025-10-19/20 | DynamoDB DNS race condition | ~15 hours | 70+ AWS services; public impact at Slack, Atlassian, Snapchat |
| 2021-12-07 | Network device congestion on the main AWS network | ~5-7 hours (began ~10:30 AM ET; recovery declared 2:22 PM PST) | Netflix, Disney+, Robinhood, Slack, Roku, Instacart, Venmo, Tinder, Coinbase, plus Amazon's own e-commerce, Alexa, and Kindle |
| 2017-02-28 | S3 maintenance command typo that removed a larger server set than intended | ~4 hours | Slack, Trello, Quora, Sprinklr, Venmo, parts of Apple iCloud; estimated $150M aggregate cost across the S&P 500 |
The cases differ in their proximate causes: a cooling failure, a DNS race condition, network-device congestion, a typo. They are alike in two ways. First, each was confined to a single region and produced effects far beyond it, because services that bill themselves as global are, in their control planes, regional. Second, each was followed by a public post-mortem of admirable technical clarity, which named the proximate cause and the engineering response, and which in no case named the customer-side architectural decision that produced the multi-hour outage at all of those customers in lockstep.
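The global-is-regional point is testable from a laptop. The legacy global STS endpoint, sts.amazonaws.com, has historically been served out of us-east-1, which is one reason AWS recommends pinning clients to regional endpoints. A minimal sketch in boto3; the region choice is only an example:

```python
import boto3

# The legacy global STS endpoint (sts.amazonaws.com) has historically
# resolved into us-east-1. Pinning a regional endpoint keeps credential
# vending working while us-east-1 is having a thermal event.
sts = boto3.client(
    "sts",
    region_name="eu-west-1",
    endpoint_url="https://sts.eu-west-1.amazonaws.com",
)
print(sts.get_caller_identity()["Arn"])
```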
## The economics of the dependency
That decision, on the customer side, was rational at the time it was made. Multi-region failover costs more than single-region operation in dollars, in engineering time, in test-harness complexity, and in the quiet ongoing tax of running two of everything for the small fraction of time the second one is needed. The expected value of the tax, weighed against the expected outage cost, did not come out in its favour for the median engineering team. "Single-region in us-east-1, and rely on AWS's published reliability numbers for the rest" was the answer most teams arrived at, separately, at companies that did not know each other and were not coordinating their bets.
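A sketch of that arithmetic, with every number invented for illustration:

```python
# Expected-value comparison behind the single-region bet. All figures are
# hypothetical assumptions; substitute your own costs and outage exposure.
single_region_cost = 1_200_000   # assumed annual infra + ops, one region ($)
multi_region_premium = 0.60      # assumed surcharge for running two of everything
outage_hours_per_year = 6        # assumed expected hours of regional outage
loss_per_outage_hour = 40_000    # assumed customer-visible cost per hour ($)

premium = single_region_cost * multi_region_premium
expected_loss_avoided = outage_hours_per_year * loss_per_outage_hour

print(f"Multi-region premium:  ${premium:,.0f}/yr")
print(f"Expected loss avoided: ${expected_loss_avoided:,.0f}/yr")
# At these made-up numbers the premium is 3x the expected loss, which is
# how the median team arrives, rationally, at single-region.
```

At those invented numbers the tax loses. The catch, as the next paragraph says, is that the loss term is correlated across every company that ran the same arithmetic.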
The bet was the same bet. The bet was that no single-AZ failure would produce a multi-hour, customer-visible outage frequently enough to justify the multi-region tax. On most days, including the day before yesterday and the day before that, the bet paid. The day the bet does not pay is the day the bet does not pay all at once, at every company that placed it, on the same morning, on the same dashboards, with the same hold music.
## Coda
The thermal event will end. Cooling capacity is being restored as of the most recent dashboard updates. The racks will come back, the EC2 instances will resume, the EBS volumes will reattach, the cascading services will catch up, and at some point in the hours ahead the dashboard will mark the incident resolved in the same composed register it has used throughout. The post-mortem, when it arrives, will be technically excellent. It will explain the cooling failure with admirable clarity. It will name the design margin that ran out, identify the corrective work AWS is undertaking, and commit to the operational improvements intended to prevent this specific failure mode from recurring in this specific shape.
What the post-mortem will not contain is the architectural decision the cooling failure exposed. The single-region single-vendor concentration that turned a building's HVAC into a global SaaS failure mode is not AWS's post-mortem to write. It is the customers'. It will be in the next post-mortem. And the one after that.