AppSync's new async Lambda resolver is great news for GenAI apps


A common challenge in building GenAI applications today is the slow performance of most LLMs (fast outliers such as GPT-4o, or models served on Groq's inference hardware, aside). To minimize perceived latency and enhance the user experience, streaming the LLM response is a must.

As such, we see a common pattern emerge in AppSync:

  1. The caller makes a GraphQL request to AppSync.
  2. AppSync invokes a Lambda resolver.
  3. The Lambda function queues up a task in SQS.
  4. The Lambda resolver returns so that AppSync can respond to the caller immediately. In the meantime, a background SQS function picks up the task and calls the LLM.
  5. The caller receives an acknowledgement from the initial request.
  6. The background function receives the LLM response as a stream and forwards it in chunks (as they are received) to the caller via an AppSync subscription.
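Steps 3-4 can be sketched as a resolver Lambda that enqueues the task and returns an acknowledgement right away. This is a minimal sketch, not the article's actual code: the SQS client is injected so the logic stays testable, and the queue URL and payload shape are illustrative.

```javascript
// Sketch of steps 3-4 of the workaround: enqueue the task, then return
// immediately so AppSync can acknowledge the caller. The real thing would
// pass an AWS SDK SQSClient; here the client is injected for testability.
export function makeResolver(sqs, queueUrl) {
  return async function handler(event) {
    const { chatId, prompt } = event.arguments;
    // Hand the slow LLM call off to the background SQS consumer.
    await sqs.send({
      QueueUrl: queueUrl,
      MessageBody: JSON.stringify({ chatId, prompt }),
    });
    // AppSync responds to the caller with this acknowledgement while the
    // background function calls the LLM and streams chunks back.
    return { chatId, status: 'PENDING' };
  };
}
```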

This workaround was necessary because AppSync could only invoke Lambda functions synchronously. To support response streaming, the first function had to hand off the LLM call to something else.

AppSync now supports async Lambda invocations.

On May 30th, AppSync announced [1] support for invoking Lambda resolvers asynchronously.

This works for both VTL and JavaScript resolvers. Setting the new invocationType attribute to Event will tell AppSync to invoke the Lambda resolver asynchronously.

Here's how the VTL mapping template would look:
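Something along these lines (a sketch; the payload shape is illustrative, the key part is the invocationType field):

```vtl
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "invocationType": "Event",
  "payload": {
    "field": "$context.info.fieldName",
    "arguments": $util.toJson($context.arguments)
  }
}
```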

And here's the JavaScript resolver:
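A sketch of the APPSYNC_JS version, with the same caveat that the payload shape is illustrative:

```javascript
// AppSync JavaScript resolver: the Invoke request object now accepts an
// invocationType field. 'Event' tells AppSync to invoke the Lambda data
// source asynchronously instead of waiting for its result.
export function request(ctx) {
  return {
    operation: 'Invoke',
    invocationType: 'Event',
    payload: { fieldName: ctx.info.fieldName, arguments: ctx.arguments },
  };
}

export function response(ctx) {
  // For async invocations, ctx.result is always null.
  return ctx.result;
}
```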

The response from an async invocation will always be null.

The new architecture

With this change, we no longer need the background function.

  1. The caller makes a GraphQL request to AppSync.
  2. AppSync invokes a Lambda resolver asynchronously.
  3. AppSync immediately receives a null response and can respond to the original request.
  4. The Lambda function receives the LLM response as a stream and forwards it in chunks (as they are received) to the caller via an AppSync subscription.
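The async-invoked function in step 4 might look like the sketch below. callLLM and publishChunk are hypothetical stand-ins (not from the article) for the LLM client and the AppSync mutation that backs the subscription; they are injected so the streaming logic is self-contained and testable.

```javascript
// Sketch of the new architecture's single Lambda function: invoked
// asynchronously by AppSync, it streams the LLM response and forwards
// each chunk to the caller's subscription as it arrives.
export function makeHandler({ callLLM, publishChunk }) {
  return async function handler(event) {
    const { chatId, prompt } = event.arguments;
    let sequence = 0;
    // callLLM is assumed to return an async iterable of tokens.
    for await (const token of await callLLM(prompt)) {
      await publishChunk({ chatId, sequence: sequence++, token });
    }
    // Signal completion so the client knows to close the subscription.
    await publishChunk({ chatId, sequence, done: true });
  };
}
```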

This is a simple yet significant quality-of-life improvement from the AppSync team.

It's not just for GenAI applications, either. The same pattern applies to any long-running task that needs more than AppSync's 30-second timeout.

Links

[1] AWS AppSync now supports long running events with asynchronous Lambda function invocations

Master Serverless

Join 17K readers and level up your AWS game with just 5 mins a week.
