It wasn't DNS after all


The DNS failure was the first symptom, not the root cause of the recent AWS outage.

The root cause was a race condition in an internal DynamoDB microservice that automates DNS record management for the regional cells of DynamoDB.

Like many AWS services, DynamoDB has a cell-based architecture.


(see my conversation with Khawaja Shams, who used to lead the DynamoDB team, on this topic)

Every cell has an automated system that keeps its DNS entries in sync.

That automation system has two main components:

  • a DNS Planner, which generates a plan for how the DNS records should look.
  • DNS Enactors, which apply those plans in Route 53.

The race condition happened between the DNS Enactors; it ultimately resulted in an empty DNS record being applied, which rendered the DynamoDB service inaccessible.
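To make the failure mode concrete, here's a minimal sketch (the names and logic are mine, not AWS's internal code) of how an Enactor that applies plans without checking their recency lets a delayed, stale apply clobber a newer record:

```python
# Hypothetical illustration only: invented names, not AWS's internal implementation.
dns_record = {"version": 2, "ips": ["10.0.0.3"]}  # stand-in for the live Route 53 record

def enact(plan):
    """Last-writer-wins apply: whatever arrives last becomes the record."""
    global dns_record
    dns_record = {"version": plan["version"], "ips": plan["ips"]}

def enact_guarded(plan):
    """Same apply, but a stale plan becomes a no-op."""
    global dns_record
    if plan["version"] <= dns_record["version"]:
        return  # stale plan, ignore it
    dns_record = {"version": plan["version"], "ips": plan["ips"]}

# A delayed Enactor finally applies an old plan whose records have since been
# cleaned up. With no recency check, it clobbers the newer record.
stale_plan = {"version": 1, "ips": []}
enact(stale_plan)
print(dns_record)  # {'version': 1, 'ips': []} -> effectively an empty DNS record
```

A recency check like `enact_guarded` turns the stale apply into a no-op, which is the kind of safeguard that could stop an empty, outdated plan from landing.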

Because EC2's control plane uses DynamoDB to manage distributed locks and leases (to avoid race conditions), the DynamoDB outage meant EC2 couldn't launch new instances.
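To illustrate why that dependency matters, here's a hedged sketch of a DynamoDB-backed lease using a conditional write. The table and attribute names are made up; the EC2 control plane's actual internals aren't public.

```python
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def acquire_lease(resource_id: str, owner: str, ttl_seconds: int = 30) -> bool:
    """Try to take a lease on a resource. The conditional write ensures only one
    owner can hold a non-expired lease at a time. (Table and attribute names
    are hypothetical.)"""
    now = int(time.time())
    try:
        dynamodb.put_item(
            TableName="control-plane-leases",
            Item={
                "resource_id": {"S": resource_id},
                "owner": {"S": owner},
                "expires_at": {"N": str(now + ttl_seconds)},
            },
            # Succeed only if no lease exists, or the existing one has expired.
            ConditionExpression="attribute_not_exists(resource_id) OR expires_at < :now",
            ExpressionAttributeValues={":now": {"N": str(now)}},
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # someone else holds the lease
        raise  # e.g. DynamoDB unreachable - everything gated on the lease stalls
```

The failure mode is in the last line: if DynamoDB itself is unreachable, leases can't be acquired or renewed, and the workflows that depend on them stall.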

New instances were created at the hypervisor level, but their networking configuration never completed.

Then NLB marked newly launched instances as unhealthy because their networking state hadn't completed. This triggered large-scale health check failures, removing healthy backends from load balancers.

So yeah, DNS was the first symptom of the problem, but it wasn't the root cause of the outage. That honour belongs to the race condition in the DNS management system inside DynamoDB!

(or you can go one "why?" further and attribute the root cause to whatever caused the unusually high delay in the Enactors, but that wasn't explained in the post mortem)

You can read the full post mortem here; it's quite long but worth a read.
