The DNS failure was the first symptom, not the root cause, of the recent AWS outage. The root cause was a race condition in an internal DynamoDB microservice that automates DNS record management for DynamoDB's regional cells. Like many AWS services, DynamoDB has a cell-based architecture (see my conversation with Khawaja Shams, who used to lead the DynamoDB team, on this topic). Every cell has an automated system that keeps its DNS entries in sync. That automation system has two main components: a DNS Planner, which produces plans describing the DNS records each cell should have, and DNS Enactors, which apply those plans to the DNS records.
The race condition between the DNS Enactors ultimately applied an empty DNS record, rendering the DynamoDB service inaccessible. Because EC2's control plane uses DynamoDB to manage distributed locks and leases (to avoid race conditions), the DynamoDB outage meant EC2 couldn't launch new instances. New instances were created at the hypervisor level, but their networking configuration never completed. NLB then marked newly launched instances as unhealthy because their networking state was incomplete, triggering large-scale health check failures and removing valid back-ends from load balancers.

So yeah, DNS was the first symptom of the problem, but it wasn't the root cause of the outage. That honour belongs to the race condition in the DNS management system inside DynamoDB! (Or you can go one "why?" further and attribute the root cause to whatever caused the unusually high delay in the enactors, but that wasn't explained in the post mortem.) You can read the full post mortem here; it's quite long but worth a read.
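The failure mode is easier to see in code. Here's a minimal Python sketch of that kind of race; the names, the generation counter, and the cleanup logic are my assumptions for illustration, not AWS's actual implementation. A delayed enactor applies a stale plan after a newer one has already landed, and garbage collection then wipes the record entirely.

```python
# Hypothetical reconstruction of the enactor race (illustrative only).
# The bug: an enactor applies its plan without checking whether a newer
# plan has already been applied, and a cleanup pass then deletes the
# now-superseded records -- leaving the endpoint with an empty record.

from dataclasses import dataclass

@dataclass
class Plan:
    generation: int        # monotonically increasing plan version
    records: list[str]     # IPs for the endpoint

dns_table: dict[str, list[str]] = {}   # name -> records
latest_generation = 0                  # highest plan the planner produced

def apply_plan(name: str, plan: Plan) -> None:
    # BUG: no guard against plan.generation being older than what's
    # already applied, so a delayed enactor can overwrite a newer plan
    dns_table[name] = plan.records

def cleanup_old_plans(name: str, plan: Plan) -> None:
    # garbage-collect records belonging to superseded plans
    if plan.generation < latest_generation:
        dns_table[name] = []           # leaves an empty DNS record

endpoint = "dynamodb.us-east-1.amazonaws.com"
old = Plan(generation=1, records=["10.0.0.1"])
new = Plan(generation=2, records=["10.0.0.2"])
latest_generation = 2

apply_plan(endpoint, new)          # fast enactor applies the new plan
apply_plan(endpoint, old)          # delayed enactor applies the stale plan
cleanup_old_plans(endpoint, old)   # GC removes the "old" records,
                                   # and the endpoint is now empty
```

The obvious guard is to refuse to apply a plan whose generation is older than the one already applied, which turns the stale write into a no-op.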
Join 17K readers and level up your AWS game with just 5 mins a week.
AI agents can now scan an entire open-source codebase for exploitable vulnerabilities in hours. Frontier models carry the complete library of known bug classes in their weights. So you can simply point an AI agent at a codebase and tell it to find zero-days. This isn't theoretical. Willy Tarreau, the HAProxy lead developer, reports that security bug reports have jumped from 2–3 per week to 5–10 per day. Greg Kroah-Hartman, the Linux kernel maintainer, described what happened: "Months ago, we...
Lambda Durable Functions makes it easy to implement business workflows using plain Lambda functions. Besides the intended use cases, they also let us implement ETL jobs without needing recursion or Step Functions. Many long-running ETL jobs have time-consuming, sequential steps that cannot be easily parallelised. For example: Fetching data from shared databases/APIs with throughput limits. When data needs to be processed sequentially. Historically, Lambda was not a good fit for these...
Step Functions is often used to poll long-running processes, e.g. when starting a new data migration task with AWS Database Migration Service. There's usually a Wait -> Poll -> Choice loop that runs until the task is complete (or failed), like the one below. Polling is inefficient and can add unnecessary cost, as standard workflows are charged based on the number of state transitions. There is an event-driven alternative to this approach. Here's the high-level approach: To start the data migration,...
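The polling loop itself is simple. Here's a plain-Python sketch of it; `get_status` stands in for a DescribeReplicationTasks-style call, the names are illustrative, and the real thing would be ASL states rather than Python.

```python
# Sketch of the Wait -> Poll -> Choice loop described above, in plain
# Python. In Step Functions, each iteration costs state transitions;
# the event-driven alternative removes the loop entirely by resuming
# the workflow from a task-state-change event instead.

import time
from typing import Callable

def poll_until_done(get_status: Callable[[], str],
                    wait_seconds: float = 0.0,
                    max_polls: int = 100) -> str:
    for _ in range(max_polls):
        status = get_status()                 # Poll
        if status in ("stopped", "failed"):   # Choice: terminal state?
            return status
        time.sleep(wait_seconds)              # Wait
    raise TimeoutError("task never reached a terminal state")

# simulate a task that finishes on the fourth poll
statuses = iter(["starting", "running", "running", "stopped"])
poll_until_done(lambda: next(statuses))  # "stopped"
```

Every pass through that loop is billable work even when nothing has changed, which is why pushing the completion signal to the workflow (rather than pulling status from the task) is the cheaper design.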