GB-seconds, vCPU-seconds, invocations. These are not how we typically think about applications, yet this is how serverless applications are priced. Each invocation costs a tiny fraction of a cent, so it must be inexpensive, right? Maybe. But why not do a little bit of math to find out?
The Pricing Model
If you are not familiar with the pricing model for serverless functions, they are billed in GB-seconds: essentially, how many seconds per month your functions are executing and how much RAM is allocated to them during that time. What confuses this further is that a GB is not the smallest unit you can purchase a serverless function in; Lambda goes as low as 128MB. Nor is a second the smallest unit of time: execution time is always rounded up to the nearest 100ms. The Lambda free tier gives you 400,000 GB-seconds per month, which, at the 128MB minimum memory size and 100ms minimum billed duration, could be as many as 32 million invocations.
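To see where that 32 million figure comes from, here is a rough back-of-the-envelope sketch in Python (assuming every invocation uses the 128MB minimum and is billed the 100ms minimum):

```python
# Rough sketch: how many minimal invocations fit in the Lambda free tier?
FREE_TIER_GB_SECONDS = 400_000   # free GB-seconds per month
MIN_MEMORY_GB = 128 / 1024       # 128MB, the smallest Lambda size
MIN_BILLED_SECONDS = 0.1         # execution time rounds up to 100ms

gb_seconds_per_invocation = MIN_MEMORY_GB * MIN_BILLED_SECONDS
max_free_invocations = FREE_TIER_GB_SECONDS / gb_seconds_per_invocation
print(f"{max_free_invocations:,.0f} invocations")  # 32,000,000
```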
In case that isn't already complicated enough, you are also charged for each request made to your function. Bandwidth charges still apply, and if you plan to make your function available to the web, there are charges for API Gateway as well.
To take all of this into account for a cost comparison would be very complicated. So instead we are just going to look at the price of the functions themselves and compare that against the cost of running a virtual machine by itself.
Setup
We will compare Lambda against an m5.large instance on EC2, which has 2 vCPUs and 8GB of RAM. Spot instances are most reflective of the behaviour of Lambda, but we will compare both Spot and On-Demand instances. Since EC2 is priced per hour, we will use 730 hours to represent one month.
Spot instance pricing fluctuates, but Amazon provides a graph of the price history for the last three months. The highest spot price for an m5.large instance over the last three months is about $0.042/hour, so that is what we will use. The On-Demand price for an m5.large instance is $0.096/hour.
Math
| Service | Cost/hr ($) | Total hours | Total ($) | % of Lambda |
|---|---|---|---|---|
| m5.large On-Demand | 0.096 | 730 | 70.08 | 20.39% |
| m5.large Spot | 0.042 | 730 | 30.66 | 8.92% |
| Lambda | 0.4800096* | 716.11** | 343.74 | 100% |
* To calculate the cost per hour for Lambda, we take the cost per GB-second (0.000016667) × the number of seconds in one hour (3600) × the number of GB (8) that we are comparing it to. The calculation looks like this: 0.000016667 × 3600 × 8 = 0.4800096. This is effectively the price per 8GB-hour.
** To get the total, we have to account for the free tier. The free tier is 400,000 GB-seconds per month. To put that in terms of 8GB-hours, we take the free tier (400,000) ÷ the number of GB (8) we are comparing against ÷ the number of seconds in an hour (3600). The calculation looks like this: 400,000 ÷ 8 ÷ 3600 = 13.89. That is the number of free hours, which we subtract from the number of hours in a month (730) to get 716.11.
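If you want to reproduce these numbers or adapt them to a different instance size or spot price, the whole table boils down to a few lines of arithmetic. Here is a sketch in Python using the prices quoted above:

```python
# Sketch of the cost comparison above, using the prices quoted in this post.
PRICE_PER_GB_SECOND = 0.000016667   # Lambda price per GB-second
INSTANCE_GB = 8                     # RAM of the m5.large we compare against
HOURS_PER_MONTH = 730
FREE_TIER_GB_SECONDS = 400_000

# Lambda cost for the equivalent of 8GB running for one hour
lambda_per_hour = PRICE_PER_GB_SECOND * 3600 * INSTANCE_GB   # 0.4800096

# Free tier expressed in 8GB-hours, then subtracted from the month
free_hours = FREE_TIER_GB_SECONDS / INSTANCE_GB / 3600       # ~13.89
billable_hours = HOURS_PER_MONTH - free_hours                 # ~716.11

lambda_monthly = lambda_per_hour * billable_hours             # ~343.74
on_demand_monthly = 0.096 * HOURS_PER_MONTH                   # 70.08
spot_monthly = 0.042 * HOURS_PER_MONTH                        # 30.66

print(f"Lambda:    ${lambda_monthly:.2f}")
print(f"On-Demand: ${on_demand_monthly:.2f} ({on_demand_monthly / lambda_monthly:.2%} of Lambda)")
print(f"Spot:      ${spot_monthly:.2f} ({spot_monthly / lambda_monthly:.2%} of Lambda)")
```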
What does it mean?
This means that if your Lambda functions are running more than 8.92% of the time in a month, it would be cheaper for you to run a server. That is about 65 hours, or 2 days, 17 hours, and 7 minutes.
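That break-even figure is just the spot cost as a fraction of the Lambda cost, applied to a 730-hour month. A quick sketch of the conversion:

```python
# Sketch: convert the 8.92% break-even point into wall-clock time per month.
HOURS_PER_MONTH = 730
break_even_fraction = 30.66 / 343.74            # spot cost / Lambda cost, ~8.92%

hours = break_even_fraction * HOURS_PER_MONTH   # ~65.1 hours
days, remainder = divmod(hours, 24)
whole_hours, fraction_of_hour = divmod(remainder, 1)
minutes = fraction_of_hour * 60
print(f"{days:.0f} days, {whole_hours:.0f} hours, {minutes:.0f} minutes")  # 2 days, 17 hours, 7 minutes
```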
I want to make it clear that I am not suggesting that an m5.large instance is cheaper for you. You still need to pick the appropriate size. If your functions only use 2GB of RAM, then pick a smaller instance; the math still works out the same.
I also want to state that I know this is rough math. Yes, I understand that when you run an instance there is overhead from the OS, which means the full amount of RAM is not available to you, along with dozens of other factors.
So what next?
Serverless functions can be a great approach, but in certain circumstances they might not be worth it. If you have a small project that you don't expect to get much traffic, or if you don't know how to run a server, then it might make sense to try serverless. If you are in a large organization, I would bet that you already have a system administrator and even some spare server capacity available to you.
If you are already using serverless, I would encourage you to do your own cost analysis; the situation is very similar with Google Cloud Functions and Azure Functions. Is it possible for you to lump some of your functions together onto one server? You might be surprised at how much money you could save.