The biggest re:Invent 2025 serverless announcements


Two weeks ago, I covered the biggest serverless announcements pre-re:Invent (see here). Now here are the biggest serverless announcements from re:Invent 2025 itself.

Lambda Managed Instances

Here’s the official announcement.

A common pushback against Lambda is that “it’s expensive at scale” because:

1) Each execution environment can only process one request at a time, wasting available CPU cycles while you wait for IO responses.

2) Paying for execution time is less efficient when handling thousands of requests per second, especially given the above.

Lambda Managed Instances address these concerns.

You keep the same programming model with Lambda and the same event triggers.

But instead of your function running in a shared pool of bare metal EC2 instances, you can now instruct AWS to use EC2 instances from your account instead. Importantly, AWS still manages these EC2 instances for you, including OS patching, load balancing and auto-scaling.

It gives you more control and flexibility, e.g. what instance types to use (but no GPU instances) and the memory-to-CPU ratio.

See this post for my more in-depth coverage of this new feature.

Lambda Durable Functions

Here’s the official announcement.

This is my favourite announcement from re:Invent 2025 :-)

Lambda Durable Functions use a replay mechanism similar to how Restate works, and to how I implemented durable execution on Lambda for a client project.

The basic idea is simple – you add checkpoints along the execution, and the Lambda service will re-invoke your function from the start, skipping over previously completed checkpoints, when:

  • The initial invocation timed out.
  • You called context.wait to pause the current invocation.
  • You used context.invoke to invoke another function and wait for its response (which suspends the current invocation).
  • You created a callback and are awaiting its response.

Here’s a handy visualization from the official documentation.

In addition to the usual Lambda function timeout (max 15 mins), Durable Functions also have an "execution timeout" for the total duration of a durable execution, which can span multiple invocations. The max execution timeout is 1 year.

Durable functions can be invoked both synchronously and asynchronously.

However, for synchronous invocations, the max execution timeout is limited to 15 mins, whereas asynchronous invocations can have an execution timeout of up to 1 year.

Durable functions also work with all event source mappings (ESMs). But ESM-triggered invocations are likewise limited to a max execution timeout of 15 mins.

Additionally, Durable Functions support DLQs, but they DO NOT support Lambda destinations.

Durable Functions blur the line between Lambda and Step Functions. I’m still organizing my thoughts on how to choose between them, but off the top of my head, these are areas where I think Step Functions wins over Lambda Durable Functions:

  • Visualization: being able to design and visualize the workflow as well as its executions. This is especially useful when working with non-technical stakeholders.
  • Exactly-once execution: standard workflows give you exactly-once execution (for 90 days) based on the name of the execution.
  • Parallel processing: the context.parallel function of the Durable execution SDK does not actually guarantee parallel processing. In Node.js, it’s essentially a wrapper around Promise.all, which gives you concurrency, not parallelism. So if you need to process large amounts of data in parallel, e.g. as part of a map-reduce task, then you want Step Functions’ Parallel state.

The replay mechanic also has some interesting failure modes and gotchas. More on that in another post! Or, you can learn all about them in my next Production-Ready Serverless workshop ;-)

S3 Vectors goes GA with better scale and performance

Here’s the official announcement.

S3 Tables support intelligent-tiering and replication

Here’s the official announcement.

CloudFront supports mutual TLS authentication

Here’s the official announcement.

There were also a lot of AI-related announcements, such as the Nova 2 models.
