During this week's live Q&A session, a student from the Production-Ready Serverless boot camp asked a really good question (to paraphrase): "When end-to-end testing an event-driven architecture, how do you limit the scope of the tests so you don't trigger downstream event consumers?"

This is a common challenge in event-driven architectures, especially when you have a shared event bus.

The Problem

As you exercise your system through these tests, the system can generate events that are consumed by downstream systems. These events can create a lot of noise for the downstream systems, especially if the tests use events that those systems can't process. For example, our test events might not contain all the fields, only the ones we need to exercise our code. Or an event might reference external entities that do not exist (and that our system doesn't need to verify). These often trigger errors and alerts in the downstream systems and make us a bad neighbour!

I have long championed the use of ephemeral environments, which allow developers to work on different features in isolated environments. They are an excellent fit for serverless technologies and their usage-based pricing: there is negligible cost overhead for having many ephemeral environments when you are not paying for uptime.

However, ephemeral environments do not directly address the problem at hand. Events generated by end-to-end tests against an ephemeral environment will still cause the undesired side effects downstream.

The Solution

One way to address this problem is to conditionally create a copy of the shared resource (e.g. the event bus) as part of the service stack. When you create an ephemeral environment, you make a copy of the event bus (local to the system under test) and use it instead of the shared event bus. This gives you the desired separation between environments and avoids waking up your downstream neighbours!
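As a sketch of what this conditional event bus could look like in CloudFormation (the parameter, stage, and resource names here are my own illustrations, not prescribed by any particular setup):

```yaml
Parameters:
  StageName:
    Type: String
    Default: dev

Conditions:
  # Treat anything that isn't a long-lived stage as an ephemeral environment
  IsEphemeralEnv:
    Fn::Not:
      - Fn::Or:
          - !Equals [!Ref StageName, dev]
          - !Equals [!Ref StageName, prod]

Resources:
  # Only created in ephemeral environments
  LocalEventBus:
    Type: AWS::Events::EventBus
    Condition: IsEphemeralEnv
    Properties:
      Name: !Sub ${StageName}-event-bus

Outputs:
  # Publishers resolve the bus name at deploy time: the local copy in
  # ephemeral environments, the shared bus everywhere else
  EventBusName:
    Value: !If
      - IsEphemeralEnv
      - !Ref LocalEventBus
      - shared-event-bus
```

The same condition would also be attached to the IAM roles and resource policies that reference the bus, so the whole bundle comes and goes together with the environment.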
Related resources, such as IAM roles, resource policies, etc., must also be created conditionally.

Importantly, this approach allows teams to develop, deploy, and test their services independently, reducing cross-team dependencies, a key indicator of high performance (as noted in Accelerate: The Science of Lean Software and DevOps by Nicole Forsgren, Jez Humble, and Gene Kim).

The Implementation

You can implement this solution with any Infrastructure-as-Code tool. With CloudFormation, or tools built on top of CloudFormation (e.g. SAM, the Serverless Framework), you can use CloudFormation Conditions. I have also created a plugin for the Serverless Framework to make it easier to express conditions like this. With Terraform, you can use the count meta-argument.

Other Approaches

Another approach is for everyone to agree that:

1. Test events are marked as such (e.g. with a dedicated attribute), and
2. Event consumers identify these test events and ignore them.
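A common form of this agreement is for publishers to stamp test events with a marker field, and for consumers to filter marked events out. With EventBridge, a consumer could even do the filtering declaratively in its rule. The "testRun" marker field and resource names below are my own assumptions, for illustration only:

```yaml
# Hypothetical consumer rule that ignores marked test events
OrderEventsRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: shared-event-bus
    EventPattern:
      source:
        - my-service
      detail:
        # Only match events that do NOT carry the test marker
        testRun:
          - exists: false
    Targets:
      - Arn: !GetAtt ConsumerFunction.Arn
        Id: consumer
```
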
I do not recommend this approach, because it requires coordination from all participants (both event publishers and subscribers). A standard abstraction layer is key for it to work, but implementing consistent behaviour across the board can be challenging, especially if you need to support multiple programming languages and IaC tools. It only takes one non-conforming participant to break the whole chain.

This approach also adds complexity to both event publishers and consumers, whereas the conditional event bus only affects event publishers, and event consumers are none the wiser. However, if most consumers in your system are also publishers, then there may not be much difference in implementation overhead.
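For reference, the Terraform count meta-argument mentioned under The Implementation could be sketched like this (the variable and resource names are assumptions I've made for illustration):

```hcl
variable "stage" {
  type    = string
  default = "dev"
}

locals {
  # Anything that isn't a long-lived stage is an ephemeral environment
  is_ephemeral = !contains(["dev", "prod"], var.stage)
}

# Only create the local copy of the bus in ephemeral environments
resource "aws_cloudwatch_event_bus" "local" {
  count = local.is_ephemeral ? 1 : 0
  name  = "${var.stage}-event-bus"
}

locals {
  # Publishers use the local bus when it exists, the shared bus otherwise
  event_bus_name = local.is_ephemeral ? aws_cloudwatch_event_bus.local[0].name : "shared-event-bus"
}
```
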