Let’s Stop Talking About Serverless Cold Starts
Whenever we talk about serverless, there’s always the one person who brings up cold starts

I was eating dinner the other day with my family. My four year old looked at her plate and said to me, “I don’t want this chicken.”
Naturally, I asked her why to which she responded, “It has sauce on it.”
See, a few months ago, we had a meal with a glaze on it that she… hated. Since then, anything that looked like sauce was a no go for her. If it wasn’t a solid, it was a sauce, and she would reject it.
Super annoying as a parent 😒.
She latched onto an idea, generalized it, and was now too worked up to try anything related.
Sound familiar?
At every presentation I’ve given on serverless, someone has brought up cold starts. They heard about them a few years ago and have latched onto the idea that serverless is a non-starter for them as long as cold starts exist.
So I asked a probing question on Twitter:
I received various answers, but the most common response was, “It’s not a real problem in production.”
There are exceptions, of course, but most of the exceptions have valid workarounds. Let’s dive in and see if we can put this argument to bed once and for all.
What Are Cold Starts?
Cold starts occur on the first request that invokes your function, or when every existing Lambda execution environment is already busy processing other requests.
In either case, Lambda must initialize a new execution environment to run your code; that initialization takes time — hence, cold starts.
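To make that initialization concrete, here is a minimal sketch of a Python Lambda handler (hypothetical — not code from the article). Code at module scope runs once per execution environment, during the cold start; subsequent invocations in the same environment skip it:

```python
import time

# Module-level code runs once per execution environment, i.e., during
# the cold start. Expensive setup (SDK clients, config loading) belongs
# here so warm invocations don't pay for it again.
_init_started = time.time()
_cold = True  # True only for the first invocation in this environment
_init_duration = time.time() - _init_started


def handler(event, context):
    """Report whether this invocation hit a cold start."""
    global _cold
    was_cold = _cold
    _cold = False  # every later invocation in this environment is warm
    return {"cold_start": was_cold, "init_seconds": round(_init_duration, 3)}
```

Calling the handler twice in the same environment shows the pattern: the first call reports `cold_start: True`, the second `cold_start: False`.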
This can happen when users start logging into your application for the day or when a burst of traffic comes through. Traffic bursts refer to points throughout the day with minimal usage, followed by a big spike in traffic. This results in cold starts as Lambda begins to…