Do You Think Serverless Is For Fatty Functions? Think Again.
You’re busy, I get it. So let me give you the main takeaway from this post right now.
You can fit your full-blown service in a single AWS Lambda function.
Did I capture your interest? Then keep on reading.
I was skeptical about serverless for a long time
As I was about containers a few years ago. And about cloud computing before that.
The truth is, we’re always skeptical before any mindset shift. Reluctant to change. Unwilling to accept something new.
If you also take into account that serverless has somehow become a supporting technology for an ill-conceived definition of microservices, the resistance to change is guaranteed.
But as with every tool or technology, it’s how we use it and the value we can extract that qualifies it as good or bad.
Serverless is a game changer
Don’t trust me on this. Read how people cleverer than me are predicting this through amazing models and maps (credits to swardley here).
What I can tell you for sure is that in my company a team of three developers, without any previous knowledge of the AWS ecosystem surrounding Lambda, was able to do the following in a few days.
- they successfully ran a skeleton Go binary in AWS Lambda;
- they set up logging and monitoring by leveraging the integration with CloudWatch;
- they set up a CI/CD pipeline thanks to the integration between GitHub and CodeBuild/CodePipeline;
- they were able to describe all needed resources through CloudFormation templates and AWS SAM;
- they added storage and made the application reactive by leveraging DynamoDB and Kinesis.
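Describing all of that in CloudFormation/SAM is less work than it sounds. Here is a minimal sketch of a SAM template for a Go function behind API Gateway; the resource name, handler path and timeout are hypothetical, and a real template would also need IAM policies and the DynamoDB/Kinesis resources.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  AppFunction:                      # hypothetical resource name
    Type: AWS::Serverless::Function
    Properties:
      Handler: main                 # the compiled Go binary
      Runtime: go1.x
      Timeout: 30
      Events:
        Api:                        # route every HTTP call to the single function
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY
```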
Let me repeat that: a few days, a team of three developers, no previous knowledge. No DevOps engineers involved.
By using serverless technologies, you can forget about the infrastructure. Even containers disappear.
Of course there’s a trade-off to be made in terms of costs, flexibility and customization. But all this comes later. You usually don’t need to worry about those factors when you start building something.
It’s insane how easily and quickly you can validate a business idea by focusing on your application logic only.
So should I put my whole application in a single Lambda?
It depends.
If you’re starting from scratch and you need to quickly build an MVP, it really makes sense to put everything in one lambda.
With monolithic applications, you have two choices.
- Assess whether it makes sense and is technically feasible to move everything into a single lambda (usually it isn’t), then work your way through a refactoring based on separation of concerns;
- or keep the monolith as it is and start building the right number of lambda services around it after you have identified their boundaries (you can use EventStorming and context mapping for that).
Regardless of your case, the rule of thumb is to find the boundaries of your domain contexts and logically isolate them into services.
Embrace reactiveness
By its nature, the serverless runtime has limitations.
The maximum time of a single function execution in AWS Lambda is currently 5 minutes. And I bet that your application has batch processes or long-running jobs that take more than that to finish.
This is why it’s difficult to move legacy monolithic applications into a serverless environment. Even if you’re lucky enough to be on a greenfield project, that limit is going to stand in your way.
How can you work around it? By making your application reactive and, more specifically, event-driven.
In a serverless, event-driven application, you trigger the execution of your function with events you published in previous executions.
Let’s give an example.
Use case: as Amazon.com, I want to generate all related transactions between me and sellers when a customer successfully pays for an order.
There are a couple of things happening here.
- an order is created by a customer;
- payment is confirmed;
- single seller orders and order items are created;
- transactions for sales and commissions are generated.
In a monolithic, non event-driven application, this would all take place serially, in the same process. Maybe the developers could wisely decouple the code by distributing the work across four services (shopping cart, payment, seller order management, finance), but that’s as far as you can go.
In an event-driven application, this is what would happen:
- when the customer confirms her order, a ShoppingCart.OrderWasConfirmed event is published;
- the payment service listens to that event, processes the related payment and publishes a Payment.OrderWasPayed event;
- the seller order management service also listens to ShoppingCart.OrderWasConfirmed and creates the single seller order with order items and customer details;
- the finance service listens to Payment.OrderWasPayed and generates the related sales transactions and commissions.
Implementing this process in the serverless world means that your function will be executed every time an event is published.
For example, this is what could happen under the hood (assuming Amazon.com runs on AWS).
- The customer confirms her order by clicking a button in the UI.
- A POST request triggers the first lambda execution through an API Gateway endpoint. The lambda has a router which can handle events coming from AWS API Gateway. The request is handled and a ShoppingCart.OrderWasConfirmed event is published into a dedicated Kinesis stream. At this point the execution terminates.
- As soon as the event is published, the lambda is triggered again with the ShoppingCart.OrderWasConfirmed event as an input argument; the router starts the related process step by calling the payment service, which in turn publishes a Payment.OrderWasPayed event into a dedicated Kinesis stream. The execution terminates.
- Again, the event is used as a trigger for two more lambda executions, calling the seller order management and the finance services.
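The per-event dispatch in the steps above can be sketched as a router that maps event names to subscribed handlers, each of which may publish follow-up events. Everything here (names, signatures, the in-memory recursion standing in for the Kinesis trigger) is a hypothetical, stdlib-only illustration, not AWS wiring.

```go
package main

import "fmt"

// Event is a minimal domain event for this sketch.
type Event struct {
	Name    string // e.g. "ShoppingCart.OrderWasConfirmed"
	Payload string
}

// Handler processes one event and may publish follow-up events.
type Handler func(e Event) []Event

// Router maps an event name to its subscribed handlers, mirroring
// how several lambdas can listen to the same event.
type Router struct {
	handlers map[string][]Handler
}

func NewRouter() *Router { return &Router{handlers: map[string][]Handler{}} }

func (r *Router) Subscribe(name string, h Handler) {
	r.handlers[name] = append(r.handlers[name], h)
}

// Dispatch runs every handler subscribed to the event and then
// dispatches whatever events they published, emulating the
// "publish triggers the next execution" loop.
func (r *Router) Dispatch(e Event) {
	for _, h := range r.handlers[e.Name] {
		for _, next := range h(e) {
			r.Dispatch(next)
		}
	}
}

func main() {
	r := NewRouter()
	r.Subscribe("ShoppingCart.OrderWasConfirmed", func(e Event) []Event {
		fmt.Println("payment: charging", e.Payload)
		return []Event{{Name: "Payment.OrderWasPayed", Payload: e.Payload}}
	})
	r.Subscribe("ShoppingCart.OrderWasConfirmed", func(e Event) []Event {
		fmt.Println("seller orders: splitting", e.Payload)
		return nil
	})
	r.Subscribe("Payment.OrderWasPayed", func(e Event) []Event {
		fmt.Println("finance: booking transactions for", e.Payload)
		return nil
	})
	r.Dispatch(Event{Name: "ShoppingCart.OrderWasConfirmed", Payload: "order-42"})
}
```

In the real setup each handler would live in its own function execution and the "publish" would go through Kinesis, but the routing idea is the same.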
By using this approach, you can separate every step of your business processes into single function executions, overcoming time and other resource limitations. At the same time, you’re naturally led to improve the architecture of your application.
It’s a win-win situation.