Speed Up Your CI Pipeline With Smaller Docker Images
The less you need to download, the faster you can run
CI/CD allows developers and organizations to move faster. By automating tasks like building, testing, and deploying software, we spend less time on mundane tasks and have more time to work on our actual applications instead. Solutions offered by the likes of GitLab, CircleCI, and GitHub allow us to easily create CI/CD jobs.
In general, a CI/CD job should run in a separate, isolated Docker container. This way, you can have a reproducible build environment. For example, you can use a Node.js image that is hosted on Docker Hub. Your job then runs in a container that is based on the selected image. The container has all the Node dependencies you need to build your application.
As with many things, the “less is more” motto also applies to CI/CD. After all, we don’t want to have long-running jobs. However, we don’t want to compromise on the benefits of a CI/CD platform either.
In this article, I’ll share one possible way to speed up your pipeline. For the code examples, I will use GitLab CI.
How To Reduce CI/CD Job Times With Smaller Docker Images
Usually, your CI/CD platform of choice allows you to configure jobs and pipelines using YAML files. In the case of GitLab CI, we want to have a .gitlab-ci.yml file in our repository. The thing we’re interested in right now is the image configuration option. The image keyword names the Docker image that the Docker executor runs to perform the CI tasks.
Our first attempt looks like this:
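A minimal sketch of such a .gitlab-ci.yml, assuming standard npm scripts for the build, test, and lint steps:

```yaml
# Sketch of the configuration; job names and npm scripts are assumptions.
# Default image for every job: the full Node 12.10.0 image from Docker Hub.
image: node:12.10.0

# Install dependencies before each job for a clean, reproducible node_modules.
before_script:
  - npm ci

build:
  script:
    - npm run build

test:
  script:
    - npm test

lint:
  script:
    - npm run lint
```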
Let’s talk about this configuration briefly:
- We tell GitLab to use a Node image (version 12.10.0) by default.
- We run npm ci before each job to install the dependencies.
- We have three jobs that build, test, and lint our application.


There’s nothing particularly wrong with this configuration. However, if you take a look at the job log, you can see that it took 33 seconds to download and prepare the Node image. That may not sound like a lot, but it adds up quickly in terms of billable minutes. The reason is that a “full” image like the one used here contains a number of additional things besides Node, such as runtime libraries and version control software.
Now that we have a baseline, we can try to improve on it by using a smaller image. In the example below, I’m using an alpine-node image. Alpine Linux is much smaller than most distribution base images, so images built on top of it tend to be much smaller as well. Aside from this change, the configuration remains the same as above.
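Whether you use the official Alpine variant of the Node image or a community image such as mhart/alpine-node, the change is a single line; the exact tag below is an assumption:

```yaml
# Alpine-based Node image instead of the full one; the rest of the file is unchanged.
image: node:12.10.0-alpine
```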


We’re now down to 15 seconds for downloading and preparing the Node image! That’s less than half the time it took to set up the full Node image. This is a considerable improvement for something that only required a single changed line of code.
Frontend projects in particular are likely to be compatible with smaller Docker images, as they rarely need anything beyond Node itself.
Caveats of Smaller Docker Images
Sometimes, though, you need a full image. For example, I tried to use an alpine-node image in an Express.js project that uses the bcrypt library. The job log shows that the alpine-node image does not meet all the requirements to properly install the project dependencies: bcrypt compiles a native addon during installation, and the Alpine-based image does not ship the build tools it needs. Hence, the job fails, and we need to use an image that meets the project requirements.
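The simplest fix here is to fall back to an image that can build the native dependency. A minimal sketch, with the tag again assumed:

```yaml
# bcrypt compiles a native addon during npm ci, and the Alpine-based image
# does not include the required build toolchain, so this project uses the full image.
image: node:12.10.0

before_script:
  - npm ci
```

If only some jobs install native dependencies, GitLab CI also lets you set image per job, so you can keep the smaller image for the jobs where it works.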

Conclusion
Thanks for reading this short article about how to speed up CI/CD jobs by using smaller Docker images. As you can see, it’s easy to try a different Docker image like alpine-node and see if it works for you. In general, it is a good idea to start with a small image. If it does not meet your requirements, you can always switch to a larger one.
Do you know any other tricks to speed up CI/CD jobs? Let me know in the comments.