Building and Deploying a Simple App to Kubernetes Using “werf”
Learn how to use the open source tool

This article looks at building a Docker image of a minimalistic application and deploying it to a Kubernetes cluster using the open source tool werf. I will also show how to deliver further changes to the application code and to the infrastructure it runs on.
I will use a small echo server based on a shell script as the example application. This server returns the string "Hello, werfer!" in response to requests to the /ping endpoint.
NB. You can explore all files of this minimalistic application and download them from this repository.
The Kubernetes cluster we will use for this article is based on minikube, so you don’t need any specific hardware to follow the instructions: your regular desktop/laptop will fit.
werf: Short Intro
For those new to this CLI utility, werf implements the complete application delivery workflow in Kubernetes. It uses Git as a single source of application code and configuration:
- Each commit reflects a particular application state;
- werf synchronizes it with the container registry (by building the missing layers of the final Docker images) and with the application running in Kubernetes (by re-deploying the resources that have changed);
- werf also cleans up obsolete artifacts in the container registry using a unique algorithm based on Git history and user-defined policies.
The distinctive feature of werf is the integration of many well-known tools for developers and DevOps/SRE engineers, such as Git, Docker, container registries, CI systems, Helm, and Kubernetes. These components are combined to provide an opinionated CI/CD workflow for delivering apps to Kubernetes. Bringing them together minimizes the effort needed to implement CI/CD.
Let’s see it in action.
Preparing the System
Before starting, install the latest stable version of werf (v1.2 from the stable release channel) on your system (refer to the official documentation).
All commands and actions given in this article apply to the Linux operating system (tested on Ubuntu 20.04.3). While the commands are generally the same for other systems such as Windows and macOS, slight variations may exist. If you struggle with any OS-specific instructions, please check the links at the end of this article.
Building an Image
First, we have to create the application itself. Let’s create a working directory (in our case, it is the app directory in the user’s home directory):
mkdir ~/app
Create a hello.sh script in the directory with the following contents:
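The script itself is not reproduced here, so below is a minimal sketch of such a shell-based echo server, assuming the ncat utility is available in the container; the exact contents in your case may differ:

#!/bin/sh
# A minimal sketch (the original script may differ): answer every HTTP
# request on port 8000 with a fixed greeting; ncat prints the incoming
# request to stdout, which is what you later see in the container logs.
RESPONSE="Hello, werfer!"
while true; do
  printf "HTTP/1.1 200 OK\n\n%s\n" "$RESPONSE" | ncat -l 8000
done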
Initialize a new Git repository in the directory and commit the first changes — the script we’ve just created:
cd ~/app
git init
git add .
git commit -m initial
Since our application will be built and run in Docker, let’s also create a Dockerfile with instructions for building an application image:
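The Dockerfile is not shown here either; here is a sketch of what it might look like, assuming an Alpine base image and the ncat utility used in the script above:

# A sketch; the original Dockerfile may differ.
FROM alpine:3.14
WORKDIR /app
# ncat is used by hello.sh to listen on port 8000.
RUN apk add --no-cache nmap-ncat
# Copy the application script into the image and make it executable.
COPY hello.sh .
RUN chmod +x hello.sh
EXPOSE 8000
CMD ["/app/hello.sh"]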
For werf to use the Dockerfile for building, we need to create a werf.yaml configuration file in the project root that describes the Dockerfile build:
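A minimal werf.yaml for a Dockerfile-based build looks roughly like this (the project name is an assumption consistent with the rest of the article):

project: werf-first-app
configVersion: 1
---
# The component name "app" is referenced later in the Helm templates.
image: app
dockerfile: Dockerfile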
A repository with the files created up to this point is available in this directory of the werf/first-steps-example repository.
Now, we are ready to build our application. Note that you need to commit all the changes to the project repository (the Dockerfile, etc.) before building, i.e. run the following commands first:
git add .
git commit -m FIRST
Start the build using the command below:
werf build
werf will print the build log as it assembles the image.
To check if the build was successful, run the application with:
werf run app --docker-options="-ti --rm -p 8000:8000" -- /app/hello.sh
Let’s take a closer look at the above command. The --docker-options option specifies a set of Docker-related parameters, while the command to execute in the container is given at the end, preceded by two hyphens.
Let’s check that everything is up and running as intended. To do this, go to http://127.0.0.1:8000/ping in your browser or run the following curl request in another terminal:
curl http://127.0.0.1:8000/ping
You should see the “Hello, werfer!” greeting. In addition, the following message should appear in the logs of the running container:
GET /ping HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: curl/7.68.0
Accept: */*
Preparing for the Deployment
Building an app is only half the job (or even a third). After all, you still have to deploy it to production servers. To do this, let’s create a local “production” Kubernetes cluster and configure werf to use it. Here’s the list of steps to take:
- install and run minikube, a minimal Kubernetes distribution (it is ideal for testing purposes);
- install the NGINX Ingress Controller, a cluster component responsible for traffic routing;
- edit the /etc/hosts file to enable cluster access via the application’s domain name;
- log in to Docker Hub and set up a Secret with the required registry credentials;
- deploy the application to Kubernetes.
1. Installing and running minikube
First, install minikube as described in the official documentation. If you already have it installed, make sure that your version is the latest one.
Let’s fire up a Kubernetes cluster using minikube:
# Delete the existing minikube cluster (if there is one).
minikube delete
# Start a new minikube cluster.
minikube start --driver=docker
Set the default Kubernetes namespace so that you don’t have to enter it every time you use kubectl (note that we only set the default namespace name here; we will create the namespace itself later):
kubectl config set-context minikube --namespace=werf-first-app
If you do not have kubectl installed, there are two ways to install it:
- Install it manually using the official documentation;
- Use the kubectl binary that comes with minikube. To do this, run the following commands:
alias kubectl="minikube kubectl --"
echo 'alias kubectl="minikube kubectl --"' >> ~/.bash_aliases
If you choose the second option, the utility will be downloaded and installed the first time you invoke kubectl using the alias above.
Let’s check if kubectl works by listing all the Pods running in the newly created cluster:
kubectl get --all-namespaces pod
A Pod is an ephemeral Kubernetes entity that hosts one or more application containers and resources shared between those containers.
Running this command lists all the Pods in the cluster along with their current state. Look closely at the READY and STATUS columns. If all Pods have the Running status and the numbers in the READY column are 1/1 (the number on the left must be equal to the number on the right), then our cluster is ready to use. If that is not the case, try waiting a little longer and rerun the command (some Pods probably have not had time to start yet).
2. Installing NGINX Ingress Controller
The next step is to install and configure the NGINX Ingress controller. It will route external HTTP requests to our cluster.
Use the following command to install it:
minikube addons enable ingress
This process can take some time, depending on your PC’s performance. For example, it took my machine about four minutes to install this add-on.
Once the process is complete, you should see the following success message:
The 'ingress' addon is enabled
Wait for the add-on to start and check if it works:
kubectl -n ingress-nginx get pod
You should see several Pods in the ingress-nginx namespace. The ingress-nginx-controller Pod is the one that interests us: its Running status means that everything is fine and the controller is running.
3. Editing the hosts file
The last step in setting up the environment is to edit the hosts file so that all requests to the test domain end up in the local cluster.
In our case, we will use the werf-first-app.test address. Run the minikube ip command in the terminal and make sure it outputs a valid IP address (192.168.49.2 in my case). If it does not, go back and reinstall the minikube cluster.
Next, run the following command:
echo "$(minikube ip) werf-first-app.test" | sudo tee -a /etc/hosts
You can check whether the above command was successful by viewing the hosts file. There should be a line like this: 192.168.49.2 werf-first-app.test.
Now, let’s see if everything works as expected. To do this, we will send a curl request to the application endpoint:
curl http://werf-first-app.test/ping
In this case, the NGINX Ingress Controller should return a 404 page, indicating that the endpoint is not yet available.
4. Logging in to Docker Hub
Now, we need to set up a repository for the built images. We suggest using a private Docker Hub repository. For convenience, we will use the application name (werf-first-app) as the repository name.
Log in to Docker Hub by running the following command:
docker login
Username: <DOCKER HUB USERNAME>
Password: <DOCKER HUB PASSWORD>
You should see the Login Succeeded message.
5. Creating a Secret for registry access
To use the private registry to store images, you must create a Secret with registry login credentials. Note that the Secret must be located in the same namespace as the application.
Therefore, you need to create a namespace for the application beforehand:
kubectl create namespace werf-first-app
You should see a message confirming that the new namespace has been created (namespace/werf-first-app created).
Next, create a Secret named registrysecret:
kubectl create secret docker-registry registrysecret \
--docker-server='https://index.docker.io/v1/' \
--docker-username='<DOCKER HUB USERNAME>' \
--docker-password='<DOCKER HUB PASSWORD>'
If successful, you should see the secret/registrysecret created message. If you made a mistake when creating the Secret, delete it with the kubectl delete secret registrysecret command and recreate it.
Note that the method described above is a standard way to create Secrets in Kubernetes.
This concludes the preparation of the environment for deploying the application to the cluster.
We will use the Secret created above to pull application images from the registry by specifying it in the imagePullSecrets field when setting up the Pods.
Deploying the Application to the Cluster
Before deploying the application, we have to create Kubernetes manifests that define the resources we need. We will use the Helm chart format for this purpose. Helm charts (or Helm packages) contain all the resource definitions required for running an application or service in a Kubernetes cluster.
We’ll need three K8s resources for our application. While Deployment is responsible for running the app in containers, Ingress and Service route external and internal traffic in the cluster, respectively.
We will put the manifests mentioned above in the templates subdirectory of the hidden .helm directory, ending up with the file structure shown below.
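Here is a sketch of the resulting project tree (the manifest file names are assumptions consistent with the rest of this article):

.helm/
  templates/
    deployment.yaml
    ingress.yaml
    service.yaml
.dockerignore
Dockerfile
hello.sh
werf.yaml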
Note: you have to add the directory with the manifests to the .dockerignore file to exclude these files from the Docker image build context:
/.helm/
Let’s take a closer look at our resource manifests.
1. Deployment
The Deployment resource creates a set of Pods for running the application. It looks like this:
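The manifest itself is not reproduced here; below is a sketch of a Deployment consistent with the description that follows (the resource name, labels, and ports are assumptions matching the rest of the article):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: werf-first-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: werf-first-app
  template:
    metadata:
      labels:
        app: werf-first-app
    spec:
      # The Secret created earlier, used to pull the image from the private registry.
      imagePullSecrets:
      - name: registrysecret
      containers:
      - name: app
        # werf substitutes the full name of the image built for the "app" component from werf.yaml.
        image: {{ .Values.werf.image.app }}
        ports:
        - containerPort: 8000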
Here, the {{ .Values.werf.image.app }} template variable is used to insert the full name of the application’s Docker image. Note that you must use the same component name that was used in werf.yaml (app in our case).
werf automatically inserts the full names of the images to be built, as well as other service values, into the Helm chart values (.Values). You can access them using the werf key.
werf only rebuilds images when the added files change (those used in the Dockerfile COPY/ADD instructions) or if werf.yaml itself is changed. A rebuild causes the image tag to change, which automatically leads to a Deployment update. If there are no changes to these files, the application image and its associated Deployment remain unchanged, meaning that the application’s state in the cluster is up to date.
2. Service
The Service resource allows other applications in the cluster to connect to our application. It looks like this:
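A sketch of such a Service (the resource name and selector are assumptions matching the Deployment above):

apiVersion: v1
kind: Service
metadata:
  name: werf-first-app
spec:
  # Route traffic to the Pods labeled by the Deployment.
  selector:
    app: werf-first-app
  ports:
  - name: http
    port: 8000
    targetPort: 8000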
3. Ingress
Unlike the Service, the Ingress resource opens up access to our application from outside the cluster. Its purpose is to redirect traffic destined for the werf-first-app.test public domain to our Kubernetes Service. It looks like this:
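A sketch of such an Ingress, assuming the nginx ingress class enabled earlier:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: werf-first-app
spec:
  # Use the NGINX Ingress Controller installed via the minikube addon.
  ingressClassName: nginx
  rules:
  - host: werf-first-app.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: werf-first-app
            port:
              number: 8000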
Deploying the app
Let’s commit our configuration changes (the K8s resources required to deploy the application) to Git:
git add .
git commit -m FIRST
A repository with the files created up to this point is available in this directory of the werf/first-steps-example repository.
Start the deployment process with the following command:
werf converge --repo <DOCKER HUB USERNAME>/werf-first-app
Once the deployment completes successfully, let’s check that the application responds. Run the request again:
curl http://werf-first-app.test/ping
You should see the following response:
Hello, werfer!
Congratulations, you have successfully deployed the application to the Kubernetes cluster!
Making Changes to the Application
Let’s try to modify our application and see how werf rebuilds and re-deploys it into the cluster.
Scaling
Our web server runs as part of the werf-first-app Deployment. Let’s see how many replicas are running:
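The exact command is not shown above; listing the application’s Pods with kubectl does the trick (the namespace was already set as the default earlier):

kubectl get pod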
Currently, we have just one running replica (the one that starts with werf-first-app). Increase their number to four:
kubectl edit deployment werf-first-app
A text editor will open with the contents of the manifest. Find the spec.replicas field and set the number of replicas to four (replicas: 4). Wait a bit, then check the number of running app replicas:
In this case, we have manually increased the number of replicas in the cluster by editing the manifest directly, bypassing Git. Now, run the werf converge command:
werf converge --repo <DOCKER HUB USERNAME>/werf-first-app
Check the number of replicas once again:
As you can see, the number of running replicas corresponds to the one specified in the manifest stored in Git (we did not edit it). This is because werf has reverted the cluster state back to the one described in the current Git commit. This mechanism is called Giterminism (Git + determinism).
To respect this principle and do everything correctly, you need to change the number of replicas in the project files in the repository. So, let’s edit the deployment.yaml file and commit the changes to the repository:
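The change itself is a one-line edit in the Deployment spec (a snippet from the .helm/templates/deployment.yaml sketched above):

spec:
  replicas: 4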
Commit the changes and redeploy the application using the following command:
werf converge --repo <DOCKER HUB USERNAME>/werf-first-app
Now, let’s check the number of replicas again:
As you can see, there are four replicas. Let’s decrease their number back to one. To do so, edit the deployment.yaml file, commit the changes, and redeploy the application via the werf converge command.
Changing the code
Currently, our application responds with "Hello, werfer!". Let’s change the answer and redeploy the updated application to the cluster. Open hello.sh in an editor and replace the existing string with something else (e.g., Say hello one more time!):
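Assuming the RESPONSE variable from the hello.sh sketch above, the change boils down to a single line:

RESPONSE="Say hello one more time!"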
Now, commit the changes and run werf converge. What do we end up with?
curl http://werf-first-app.test/ping
Say hello one more time!
Congratulations, everything is fine and runs as expected!
Takeaways
In this article, we built and deployed a basic application to a Kubernetes cluster using werf. I hope it will help you get acquainted with werf and gain some experience deploying applications to K8s.
The article is based on the First steps chapter of the online self-study guide. To keep this article as concise as possible, I chose not to dive into the theoretical topics covered in the full guide, such as Kubernetes templates and manifests, the essential K8s resources for running applications (Deployment, Service, Ingress), werf operating modes and Giterminism, the peculiarities of using Helm in werf, and so on. You can learn more about them in the abovementioned guide. More detailed instructions, including those for other operating systems, are also available there.
Any questions and suggestions are welcome in the comments to the article or in the werf_io Telegram chat.
Resources
- werf.io — Official website of the werf utility;
- Giterminism — About giterminism, the principle used by the utility;
- GitHub — Source code repository.