Dockerizing a Go App — and Optimising Its Image Size
The multi-stage build yields a ~51x size improvement over the single-stage build

What is Docker?
Docker is an open-source tool that builds and runs portable, self-sufficient containers for your backend service or app. The containers can be run on anything that supports Docker — cloud services, on-prem, or even your local development machine.
The developer (you) gets to define the environment and dependencies available for the program in a repeatable, infrastructure-as-code fashion.
The build process is defined using a Dockerfile, and setup instructions can be found here.
Why Use Docker?
Docker has a great explainer, but it boils down to developer efficiency. By defining the exact environment that the code runs in, everyone can be on the same page, whether they are running locally in dev mode, debugging a production outage, or exploring multi-cloud deployment. No more spending days setting up a development environment, and the “works fine on my machine” tension disappears. Docker is not the only tool that offers these benefits, but it’s certainly popular and has a large community around it.
Our First Container
We’ll focus on containerizing a Chuck Norris joke server that I have previously written about.
We’ll start by building from Alpine Linux, which is a lightweight distro that has a package manager and other tools we might want when building code. In this case, we’ll use the variant of the image that ships with Go 1.17 (golang:1.17-alpine).
Next, we’ll use our package manager, apk, to install whatever dependencies we need to build our app. In this case, we only need ca-certificates to provide a secure connection between client and server.
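Piecing that together, the top of the Dockerfile looks something like this (a sketch — add any other build tools your app needs):

FROM golang:1.17-alpine
# --no-cache keeps apk's package index out of the image layer
RUN apk add --no-cache ca-certificates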
Next, we’ll set up a working directory to build our app in. From this point on in the Dockerfile, we’ll be working in this directory. Here we copy everything from our local directory into the working directory in the Docker image, but if there are files that aren’t required to build the image, you can be more selective.
COPY is of the form COPY <source location> <destination location>, so we are saying “copy the things in this directory on the local machine to the working directory in the image that we are building.”
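In Dockerfile terms, that’s roughly the following (the /app directory name here is an arbitrary choice — use whatever you like):

# everything after this runs from /app inside the image
WORKDIR /app
COPY . .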
Next, we’ll build our code. For Go, we compile and build a binary from the source material.
This will look very familiar if you’ve built many Go apps, but if not, it’s essentially saying “Use this code to build a binary called joke-web-server, and place that binary ‘here’. This binary can be statically linked (the binary won’t rely on libraries from the operating system), and we intend to run it on Linux.”
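A sketch of that build step, assuming the server’s main package sits at the root of the copied directory:

# CGO_ENABLED=0 gives us a statically linked binary; GOOS=linux targets Linux
RUN CGO_ENABLED=0 GOOS=linux go build -o joke-web-server .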
We’ll finish our Dockerfile by documenting the port that the app listens on (inside the container), and we’ll define how to run the app when the resulting image is run.
EXPOSE doesn’t actually do much; it’s essentially documentation that the app is listening on port 5000 inside of Docker. The app code listens on port 5000, so we are documenting that here. We will map the port on the host machine when we run the image.
ENTRYPOINT is the command to run when the image is run. So in this case, it’s running the binary, located in WORKDIR, called joke-web-server.
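Those last two instructions look roughly like this (the relative path resolves against WORKDIR from the earlier step):

# port 5000 matches what the app listens on; the binary was built into WORKDIR
EXPOSE 5000
ENTRYPOINT ["./joke-web-server"]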
Building and Running Our Image
We can build our image by running:
docker build -t joke-web-server .
The command is of the form docker build [OPTIONS] PATH, so we are saying “Use the Dockerfile (and other things) located in the current directory to build an image called ‘joke-web-server’”. After the process completes, we can see that our image was built by listing Docker images with docker image ls:
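The output will look roughly like the following (the ID, timestamp, and exact size will differ on your machine):

REPOSITORY        TAG      IMAGE ID      CREATED          SIZE
joke-web-server   latest   <image id>    <created time>   ~370MB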

To run the image, we need to link the Docker port (loosely defined by our previous EXPOSE directive) to a port on our host machine. This is a command line option of the form -p <local machine port>:<docker port>. So if we run our image as:
docker run -p 5001:5000 joke-web-server
we are telling Docker to link internal port 5000 (where our app is listening) to local port 5001 (the port you’d use to access the app outside of Docker).
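With the container running, you can sanity-check it from another terminal — for example, curl http://localhost:5001/ (assuming the joke endpoint is served at the root path; adjust to whatever routes the app actually defines).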
The old, reliable ctrl-c can be used to stop the server when you’re done with it.
And there we have it! This general formula can be applied to run a variety of programs, built in Go or otherwise.
But Can We Do Better?
The above resulted in a 370MB image — not great for such a small app. If I do a go build on my development machine (M1 Max MacBook Pro), I end up with a 6.7MB binary. What’s going on?
Each instruction in our Dockerfile adds a layer to the image, which can be thought of as a diff against the previous state. As we build, we’re carrying a lot of artifacts required to build the app forward with us into the final container. The OS we’ve chosen for the image is also there and likely has things we don’t need, and any packages we’ve installed are included too. Even the files used to build our binary are still hanging around! All of this increases the size of the final result, not to mention adding to the potential attack surface if/when we deploy it out in the real world.
We could focus on cleaning up all the unnecessary bits as we go along, but that adds significant complexity (and opportunities for bugs or security issues down the line). Enter Docker multi-stage builds.
Multi-stage builds are basically a pipeline where we construct a series of images to build various output artifacts. These artifacts are then shuffled to the next “stage” of the build.
But! We don’t need to keep the bits that were required to create that artifact anymore. We keep doing this until we arrive at a final container that contains the minimum set of dependencies to deploy our app. Very often, this manifests as a two-stage build: one to build the app’s binary, and one to store the app (and any dependencies) for deployment.
For our example above, that looks like the following:
Stage 1: Build
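Putting the earlier steps together, the build stage might look like this (a sketch — the only new piece is the stage name):

FROM golang:1.17-alpine AS build-stage
RUN apk add --no-cache ca-certificates
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o joke-web-server .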
Note the line FROM golang:1.17-alpine AS build-stage — we are calling this first stage build-stage, and in the next step we can pull artifacts from its filesystem.
Stage 2: Build our Final Image
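The final stage might look like the following (the certificate path is where Alpine’s ca-certificates package puts the bundle — adjust if yours differs):

FROM scratch
# pull just what we need out of the build stage's filesystem
COPY --from=build-stage /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build-stage /app/joke-web-server /joke-web-server
EXPOSE 5000
ENTRYPOINT ["/joke-web-server"]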
Here, we are copying the ca-certificates dependency and the app binary that we created in build-stage. We are leaving behind all the bits that we used to build the binary (and its dependencies).
Because our app binary is statically linked, we were able to build FROM scratch for a minimalistic resulting Docker image. You can read more about the scratch keyword here.
We now have a 7.2MB deployable image — a ~51x improvement over the single-stage build, and only a ~7% increase over building the raw binary for my machine. This now seems like a pretty good tradeoff for a portable, easily-deployable app.
You can run this image exactly as you ran the single-stage image, but now it is much smaller with very little relative attack surface.
Wrapping Up
Docker is a large project with many, many possible configurations that will be specific to your app, but for me getting started was the hardest part. I hope that this can serve as a good springboard, and would love any feedback on how to improve!