Relating With Docker and Kubernetes as Developers: An Analogy

I couldn’t find a fun K8s crash course anywhere, so I made one

Sampriti Mitra
Better Programming


Photo by The Climate Reality Project on Unsplash

To all the developers out there who haven’t had the chance to, but wish to, get a good idea of Kubernetes (K8s), and/or want to get their hands dirty with it.

As a fresher in the software industry, I found Kubernetes foreign. It was something that did not directly concern the software development I did, or so I thought. One year into the industry, I had a rudimentary understanding of what it was, but I still hadn’t had much interaction with it. Of course, I knew the basics of Kubernetes in theory, but somehow all the articles I read were too incomprehensible and the courses too long.

Having now had more hands-on interaction with Kubernetes, I find it remarkably comparable to my experience as a developer. I couldn’t find an engaging, relatable course anywhere, so I made one.

And the Journey Begins!

This is a tale of an adventurous fresher on her first day at work. This is also an article on K8s, so bear with me.

Applications: The developers who provide a predefined service to stakeholders

An application program (app or application for short) is a computer program designed to carry out a specific task, typically to be used by end users.

In a way, applications relate with devs, who carry out tasks to develop software that meets the needs of an end user.

How Do We Make Every Developer Independent and Effective?

Containers: The portable workspace kit for developers

Containers are a form of operating-system virtualisation. They are executable units of software in which application code, together with its dependencies and configuration, is packaged so that it can run in any environment.

They provide process isolation and optimise resource utilisation. Because they do not contain full operating system images, they are lighter and more portable, with much less overhead.

Think of containers like the work machines/laptops that can enable employees to be independent and deliver services effectively. They are portable, enable the employee to work from home or office, and have all the resources the employee needs to complete the tasks. They are also isolated, so the employee is not influenced by what others run on their machines.

It Starts With One Thing… the Onboarding Guide

Images: The onboarding guide/blueprint for using the workspace kit

To understand how Kubernetes works, we must familiarise ourselves with images.

An image is a set of executable commands for building a container. Images are immutable files, each a snapshot of a container, which makes them guides or templates for containers.

Images are built up in layers, each derived from the preceding layer but differing in some way. When you start an image, you’re actually running a container of it, and there can be many running containers of the same image.
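Once we build our own image (myapp, a little later in this article), you can inspect its layers yourself with:

docker history myapp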

Images can be related to the guide or manual one uses to get the work machine started. Or, think of them as the blueprint for creating a work machine in the first place :).

Getting Started With Docker Images and Containers

Docker is a well-known platform for building and running applications within containers. It lets you deploy containerized applications or software in different environments, from development to testing and production, using Docker images.

Creating a sample Go application

Let’s create a basic Go application that we will later deploy to our minikube cluster.

Create a directory my-app containing the main.go file shown below. From the command line inside that directory, run the following:

go mod init 
go mod tidy
main.go
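The app itself can be very small. A minimal sketch of main.go, assuming a web server that listens on port 10000 and responds with "Welcome to myapp!" (matching the docker and curl commands we run later), could look like this:

// main.go: a minimal HTTP server for this walkthrough.
// Assumption: it listens on port 10000 and returns "Welcome to myapp!",
// matching the ports and output used later in the article.
package main

import (
	"fmt"
	"log"
	"net/http"
)

// homePage writes the welcome message for every request to "/".
func homePage(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Welcome to myapp!")
}

func main() {
	http.HandleFunc("/", homePage)
	log.Fatal(http.ListenAndServe(":10000", nil))
}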

Creating a Dockerfile for the Go application

Next, create the Dockerfile for our application:

Dockerfile
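A minimal Dockerfile sketch, assuming a multi-stage build that compiles the Go binary and runs it on an Alpine base image (we rely on apk being available inside the pod later, which implies Alpine), could look like this:

# Build stage: compile the Go binary. The Go version tag is an assumption.
FROM golang:1.19-alpine AS build
WORKDIR /app
COPY go.mod ./
COPY *.go ./
# CGO disabled so the binary builds without a C toolchain and runs on plain Alpine.
RUN CGO_ENABLED=0 go build -o /myapp

# Run stage: a small Alpine image, so apk is available inside the container.
FROM alpine:3.17
COPY --from=build /myapp /myapp
EXPOSE 10000
CMD ["/myapp"]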

Building the docker image and running the container

To build the Docker image and list your local images, run the following commands in the terminal:

docker build --tag myapp .
docker images

You should be able to see the newly created myapp image in the list.

Now let us try running the image as a container. Since containers run in isolation, we need to expose the port inside our container to our host port.

To publish a port for our container, we’ll use the --publish flag (-p for short) on the docker run command. The format of the --publish flag is [host_port]:[container_port]. So if we wanted to expose port 8080 inside the container to port 3000 outside the container, we would pass 3000:8080 to the --publish flag.

docker run -d -p 8080:10000 myapp

Open localhost:8080 in the browser (or curl it from the terminal); it should say:

Welcome to myapp!

Now that we have an idea of Docker containers and the templates from which they are generated (images), let’s have a look at container schedulers, which is what K8s is.

Why Do We Require Container Schedulers? Why K8s Over Containers/Docker?

We saw how useful containers can be for running applications in an isolated way, in any environment. However, in production, hundreds to thousands of different containers may be required.

Container runtimes like Docker benefit from additional technologies that orchestrate or manage all of the containers in use.

How Do We Guide All Developers To Work in Sync To Provide the Final Product?

Kubernetes: The engineering manager

Kubernetes is an open source container management and deployment platform. It orchestrates clusters of virtual machines and schedules containers to run on those virtual machines based on their available computational resources and each container’s resource needs. Kubernetes can be related with a manager, who plans and coordinates developers to deliver projects and makes sure they have all the resources required to do so.

We can use K8s to deploy our services, roll out new releases without downtime, and scale (or descale) those services. It is portable, extensible, self-healing, and highly available.

What are pods?

A Kubernetes pod is a group of one or more containers, tied together for the purposes of administration and networking.

In our workspace analogy, a pod can be a pair of a senior and a junior dev working together on related tasks. It can also be a single dev who has ramped up on the system and is comfortable working on tasks alone.
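For reference, a bare Pod manifest for the myapp image we built earlier might look like the sketch below (the name myapp-pod is a made-up example; in the rest of the article we let a Deployment create pods for us instead of applying one directly):

# A hypothetical standalone Pod manifest; later on, pods are created
# by a Deployment rather than applied directly.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    imagePullPolicy: Never   # assumption: use the locally built image, don't pull
    ports:
    - containerPort: 10000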

Nodes and Clusters: Workstation that provides resources to the devs

Consider the example of the employee workspace above. Every employee would require resources like a monitor, chair, desk, etc., to function properly. Every employee needs to be assigned these resources by the manager.

A node can be thought of as the workstation that provides these resources, and Kubernetes as the manager that assigns these stations to the employees.

In Kubernetes, a Node is a worker machine that can be either virtual or physical. A Node can host many pods, and Kubernetes automatically handles pod scheduling across the cluster’s Nodes. A Node is the smallest unit of computing hardware in Kubernetes; to form a cluster, nodes pool their resources.
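Once a local cluster is up (we start one with minikube in the next section), you can list its nodes; a single-node minikube cluster will typically show just one node:

kubectl get nodes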

Getting Started With Kubectl and Minikube

Kubectl is the Kubernetes CLI, used to interact with the Kubernetes API to create and manage cluster resources. To install kubectl with Homebrew, run the following:

brew install kubectl

What is Minikube?

Minikube is a tool that lets you try out Kubernetes locally. It runs a single-node K8s cluster on your local machine for development or for experimenting with K8s. The minikube tool includes a set of built-in add-ons that can be enabled, disabled, and opened in the local Kubernetes environment.

To install and start minikube on macOS, run the following commands:

brew install minikube
minikube start

Deploying Your Service to Kubernetes

First, load the local Docker image you built into the minikube cache using the following command:

minikube cache add myapp:latest

Deployment: The team goals and structure defined at the start of the year

A deployment provides declarative updates for Pods and ReplicaSets.

In a Deployment, we define a desired state, and the Deployment Controller gradually moves the current state towards that desired state. Deployments can be used to create new ReplicaSets, or to remove existing Deployments and replace them with new ones.

Take a look at the deployment file below:

deployment.yaml
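A minimal deployment.yaml, reconstructed to match the description that follows (two replicas of the locally built myapp image, labelled app: myapp; imagePullPolicy: Never is an assumption so the image loaded into the minikube cache is used rather than pulled from a registry), could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                    # desired number of pods
  selector:
    matchLabels:
      app: myapp                 # manage pods carrying this label
  template:
    metadata:
      labels:
        app: myapp               # label given to the pods this Deployment creates
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        imagePullPolicy: Never   # assumption: use the image cached in minikube
        ports:
        - containerPort: 10000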

The deployment file above is a declarative template for pods and ReplicaSets. The Deployment, named myapp in metadata.name, creates a ReplicaSet to bring up two pods of myapp. spec.template.spec.containers[].image gives the image to pull to run the container in each pod, and spec.template.metadata.labels gives those pods the label app: myapp. The Deployment knows which pods to manage through spec.selector.matchLabels, which is also app: myapp.

What are ReplicaSets?

The goal of a ReplicaSet is to keep a consistent set of replica Pods operating at all times. As a result, it’s frequently used to ensure the availability of a certain number of identical Pods.

Common deployment strategies

Recreate: all existing pods are terminated, and new pods are then created

Rolling: pods are created in a rolling fashion, ramped up gradually until all new pods are running (see the sketch after this list for how a strategy is declared)
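A hedged sketch of how a strategy is declared, under spec.strategy in the deployment.yaml above (the RollingUpdate parameters shown are illustrative assumptions):

spec:
  strategy:
    type: RollingUpdate     # default strategy; use Recreate to terminate all pods first
    rollingUpdate:
      maxSurge: 1           # at most one extra pod above the desired count during a rollout
      maxUnavailable: 0     # keep all desired pods available while updating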

Creating a K8s deployment

kubectl apply -f deployment.yaml

Checking the minikube cluster for running pods

kubectl get pods

Now try going into the pod and checking whether the app is running:

kubectl exec -it <pod-name> -- sh
apk update
apk add curl
curl localhost:10000

Checking the self-healing capacity of Kubernetes

kubectl delete pod <pod-name>
kubectl get pods

We should still be able to see two pods running. Since we deleted a pod, the ReplicaSet controller detected it and spun up another one to maintain the desired state of two pods.

How Do External Teams Talk to Each Other?

Kubernetes services: The team SPOC that routes relevant external communication to devs

Pods are ephemeral resources. Deployments can dynamically generate and destroy pods. Because pods are unstable, transient, and volatile, we can’t trust that the application will always be reachable via a pod’s IP.

We’ll need a permanent address that will route requests to whatever pod is active at the time.

In Kubernetes, a service is an abstraction which defines a logical set of pods and a policy by which to access them. The set of pods targeted by a service is usually determined by a selector. Kubernetes services provide addresses through which associated pods can be accessed.

Take a look at the service.yaml file below:

service.yaml
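A minimal service.yaml, assuming a LoadBalancer service (so that minikube tunnel, used below, can expose it) that forwards an assumed external port 8080 to container port 10000, could look like this:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer     # assumption: exposed externally via minikube tunnel below
  selector:
    app: myapp           # route traffic to pods carrying this label
  ports:
  - port: 8080           # assumed port on the service's external IP
    targetPort: 10000    # port the app listens on inside the pod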

This specifies a service that targets port 10000 of pods with the label app: myapp.

Creating a Kubernetes Service

kubectl apply -f service.yaml

The minikube tunnel command can be used to expose LoadBalancer services. To keep the LoadBalancer active, it must be run in a separate terminal window.

minikube tunnel 

minikube tunnel runs as a process on the host and creates a network route to the cluster’s service CIDR using the cluster’s IP address as a gateway. The tunnel command gives any application on the host operating system immediate access to the external IP address.

kubectl get service myapp

You should now be able to see the external IP; earlier it would have shown as pending.

Now you can open the service in the browser at this IP, or get its URL with:

minikube service --url myapp

You should be able to view this in your browser.

Welcome to myapp!

The section about services is incomplete without an analogy from the dev life. Imagine an external team is unsure or confused about how to use a feature developed by the dev team. One way to resolve the problem is to contact a known developer directly. Of course, that would work, but what if the developer has moved to a different team and lost context? In this case, the team SPOC comes to the rescue and helps route the query within the team.

It’s Been a Long Article… and I Hope To Tell You More About It Next Time!

With that, we come to the end of the article on K8s and the adventures of the fresher on her first day. However, there is more to be learned, and more adventures to follow, so stay tuned!
