Replicating and Load Balancing Go Applications in Docker Containers with Consul and Fabio

Exploring a simple, yet flexible implementation of registered and load-balanced application containers.

Matt Wiater
Better Programming

--


My usual solution for container replication, orchestration, and load balancing is Kubernetes. But last week, a friend asked me about a project she was working on — a fairly basic replication scenario — but she doesn’t have the k8s background to make that a viable tool. Having worked a bit with Consul in the past, I knew this could be a simpler alternative to set up and illustrate the concepts behind the implementation.

The Question

What is a simple way to demonstrate distributing a Golang binary in n-number of Docker containers behind a load balancer using a discovery agent mechanism?

After some quick research, the path of least configuration used Consul and Fabio, both running in Docker containers.

To follow along at home, you’ll need Docker; the code in this demonstration is available on GitHub.

The hurdles

As there is a bit of port juggling in this scenario, I want to control the dynamic nature of the infrastructure in the run stages rather than the build stages. What I do not want is to have separate application builds, or separate Docker image builds to handle different port-binding scenarios. So, my primary goals are:

  • One Golang binary, one build.
  • One Docker image, one build.

Port planning

Consul and Fabio are stable and easy to use with little configuration. However, we must develop a consistent, simple way to map dynamic ports to immutable Golang builds and Docker containers. Let’s outline our needs:

  • Our Golang binary will always listen on one predefined port, no matter how many replicas. In the example app, it is port 8000. While we can only run one instance of the app locally — as no two apps can bind to the same network port — we can utilize Docker to overcome this hurdle.
  • Our Docker image will always map to the internal port of the Golang app (8000) while exposing varied ports to the network. In effect, we might have three Docker containers from the same Docker image, each binding to different network ports but passing that traffic internally to the same port.

Here’s what that looks like:

Container01 :8001 (network-facing) -> Golang App :8000 (internal)
Container02 :8002 (network-facing) -> Golang App :8000 (internal)
Container03 :8003 (network-facing) -> Golang App :8000 (internal)
...

As we’ll see, this is trivial to do by varying our docker run commands and binding to ports dynamically.

However, there is an interesting problem at play. And it involves turtles. We need to register each instance of our Golang application in Consul from within the application. For example,

registration := &api.AgentServiceRegistration{
  ID:      serviceID,
  Name:    serviceName,
  Port:    servicePort,
  Address: serviceIP,
  Tags:    serviceTags,
}

Since the Docker containers we’ve built all attach to the network with different ports, we’ve avoided the problem of binding two applications to the same port. However, it is our Golang application inside of the Docker container that is registering with Consul. So, our statically built binary needs to know about the statically built Docker container it resides within, which is binding to dynamic host ports at runtime. Yeah, it’s turtles all the way down!

As confusing — yet accurate — as that description was, it’s not difficult to solve. This is a bit of a Matryoshka doll situation, but let’s think about what we need to do:

  • Our Golang app needs to know about its immediate parent’s external network-facing port, which, in this case, is the Docker Container, and that port is variable.

Since we’re assigning these ports during the Docker run phase, e.g.: docker run -d --rm -p 8001:8000, we can pass that same port number into the container as an environment variable via: docker run -d --rm -p 8001:8000 -e DOCKERPORT=8001. By doing this, we can access this variable port number from within our Golang application via: os.Getenv("DOCKERPORT").

Problem solved!

The Components

Consul: application registration and health checks

The Golang application must register with Consul and provide connection information and application metadata (see the docs for the full list of Consul registration options). Below, you can see how the Golang application is making use of the DOCKERPORT environment variable mentioned above via the assignment to dockerContainerPort:


serviceID := fmt.Sprintf("hello-%v", myUUID)
dockerContainerPort, _ := strconv.Atoi(os.Getenv("DOCKERPORT"))

registration := &api.AgentServiceRegistration{
  ID:      serviceID,
  Name:    "hello-server",
  Port:    dockerContainerPort,
  Address: ipAddress,
  Tags:    tags,
  Check: &api.AgentServiceCheck{
    HTTP:     fmt.Sprintf("http://%s:%v/health", ipAddress, dockerContainerPort),
    Interval: "10s",
    Timeout:  "30s",
  },
}

For demonstration purposes, the application registers two routes:

  • /hello/api/v1: A single API endpoint that returns sample data in JSON: a response status code, the application name, the unique application UUID (a simple indicator to confirm that the load balancer is doing the job we defined: round-robin balancing between all of our instances), and about 200Kb of random data, added in case we want to run some load-testing benchmarks.
  • /health: A simple endpoint that lets Consul periodically check that the server is alive.

Fabio: routing to and from the load balancer

Fabio is dead-simple to use with minimal configuration. See all the configuration options here. This demo application only needs a few overrides based on how we decided to make use of our ports (comments added for clarity):

proxy.addr = :9000                        // Port used to route to our instances
registry.consul.register.addr = :9001     // Fabio registration address for Consul
ui.addr = :9001                           // Dashboard port
registry.consul.addr = 192.168.0.99:8500  // Consul registry address
proxy.strategy = rr                       // Round-robin load balancing strategy

Convenience Scripts

To demonstrate all of the pieces above, I’ve included a Makefile that will bootstrap and teardown the miniature test cluster. The make commands below simply execute bash scripts in the ./scripts folder of the repository. Just type make at the root of the repository:

Targets in this Makefile:

make docker-run-consul-discovery
make docker-teardown-consul-discovery
make golang-build
make golang-run

For details on these commands, see the bash scripts in the 'scripts/' directory.

Start the load-balanced cluster

The bash script that starts the cluster makes use of the .env file in the repository. While you can easily use the make commands above to explore the example, I want to take a few minutes to describe the process.

Bash script breakdown

.env file (comments added for clarity):

NUMBER_OF_INSTANCES=8                         # Number of Go app replicas to create
SERVERPORT=8000                               # Golang application port
DOCKERPORT=8001                               # Docker container port
DOCKERIMAGE=mattwiater/golangconsuldiscovery  # Docker image name
IPADDRESS=192.168.0.99                        # Local IP Address
CONSUL_HTTP_PORT=8500                         # Consul dashboard port
FABIO_DASHBOARD_PORT=9001                     # Fabio dashboard port
FABIO_HTTP_PORT=9000                          # Fabio Load Balancer endpoint

In the Bash script, the Consul container is invoked via the following:

docker run -d --rm \
-p ${CONSUL_HTTP_PORT}:${CONSUL_HTTP_PORT} \
-p 8600:8600/udp \
--name=golangconsuldiscovery-consul \
consul agent -server -ui -node=consul -bootstrap-expect=1 -client=0.0.0.0

In the Bash script, the Fabio container is invoked via the following:

docker run -d --rm \
-p ${FABIO_HTTP_PORT}:${FABIO_HTTP_PORT} \
-p ${FABIO_DASHBOARD_PORT}:${FABIO_DASHBOARD_PORT} \
-v ./fabio.properties:/etc/fabio/fabio.properties \
--name=golangconsuldiscovery-fabiolb \
fabiolb/fabio

The Golang application containers are invoked using a command in this format:

docker run -d --rm \
-p $DYNAMIC_DOCKER_PORT:$SERVERPORT \
--name golangconsuldiscovery-hello-${INSTANCE} \
-e DOCKERPORT=${DYNAMIC_DOCKER_PORT} \
-e CONSUL_HTTP_ADDR=${IPADDRESS}:${CONSUL_HTTP_PORT} \
-e FABIO_HTTP_ADDR=${IPADDRESS}:${FABIO_HTTP_PORT} \
$DOCKERIMAGE

The command above is wrapped in a Bash loop based on the NUMBER_OF_INSTANCES environment variable, which increments the port numbers starting with the first defined DOCKERPORT for the Docker containers via the following:

DYNAMIC_DOCKER_PORT=${DOCKERPORT}
for (( INSTANCE=1; INSTANCE<=NUMBER_OF_INSTANCES; INSTANCE++ ))
do
  ...
  ((DYNAMIC_DOCKER_PORT=DYNAMIC_DOCKER_PORT+1))
done

The loop above lets you launch multiple Docker containers with unique ports containing our Golang application. Varying the docker run commands in this way has fulfilled the primary goals:

  • One Golang binary, one build.
  • One Docker image, one build.

See it in Action

Putting it all together, you can run make docker-run-consul-discovery (the output below is truncated for clarity; the full output can be seen in the repository’s README):

IMAGE                              PORTS                    NAMES
mattwiater/golangconsuldiscovery   8008->8000               golangconsuldiscovery-hello-8
mattwiater/golangconsuldiscovery   8007->8000               golangconsuldiscovery-hello-7
mattwiater/golangconsuldiscovery   8006->8000               golangconsuldiscovery-hello-6
mattwiater/golangconsuldiscovery   8005->8000               golangconsuldiscovery-hello-5
mattwiater/golangconsuldiscovery   8004->8000               golangconsuldiscovery-hello-4
mattwiater/golangconsuldiscovery   8003->8000               golangconsuldiscovery-hello-3
mattwiater/golangconsuldiscovery   8002->8000               golangconsuldiscovery-hello-2
mattwiater/golangconsuldiscovery   8001->8000               golangconsuldiscovery-hello-1
fabiolb/fabio                      9000->9000, 9001->9001   golangconsuldiscovery-fabiolb
consul                             8500->8500               golangconsuldiscovery-consul
Complete!

Dashboards may take a few seconds to become available:
Consul Dashboard is available: http://192.168.0.99:8500/ui/dc1/services
Fabio Dashboard is available: http://192.168.0.99:9001/routes
Fabio Load Balanced Endpoint is: http://192.168.0.99:9000/hello/api/v1

Once all the containers are up and running, you’ll have access to both Consul and Fabio dashboards at the links above for basic setup information.

The consul services dashboard

On the main Services dashboard, we can see that Consul is running properly and that one instance of Fabio is registered, along with eight instances of the Golang application:

Consul: Services

The Consul hello-server service dashboard

Clicking on the hello-server service shows that all eight instances are up and passing their health checks. We can also see that Consul has registered each instance bound to a different port as defined by our Docker commands:

Consul Service List: hello-server

The Fabio load balancer routes dashboard

Fabio’s Routing Table also shows that all of our hello-server instances are grouped together with equal distribution weights.

Fabio Load Balancer: Routes

Once the Fabio load balancer is up, you can visit the balanced endpoint directly, e.g., 192.168.0.99:9000/hello/api/v1. By refreshing this page, we can see that Fabio is doing its job: it rotates through all eight application UUIDs, round-robin style. Here’s what the response looks like:

{
  "Status": 200,
  "Application": "hello",
  "UUID": "40fef4db-91ee-4341-bef4-7753bdd8c3d8",
  "Data": "..."
}
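To sanity-check the round-robin behavior programmatically, you could decode a series of these payloads and count the distinct instance UUIDs. This is a hypothetical client-side sketch, not code from the repository (distinctUUIDs is an illustrative helper):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// response mirrors the payload served at /hello/api/v1.
type response struct {
	Status      int
	Application string
	UUID        string
	Data        string
}

// distinctUUIDs counts unique instance IDs across a series of raw JSON
// responses; under round-robin balancing this should approach the replica count.
func distinctUUIDs(bodies [][]byte) (int, error) {
	seen := map[string]bool{}
	for _, b := range bodies {
		var r response
		if err := json.Unmarshal(b, &r); err != nil {
			return 0, err
		}
		seen[r.UUID] = true
	}
	return len(seen), nil
}

func main() {
	n, _ := distinctUUIDs([][]byte{
		[]byte(`{"Status":200,"Application":"hello","UUID":"aaa","Data":"x"}`),
		[]byte(`{"Status":200,"Application":"hello","UUID":"bbb","Data":"x"}`),
		[]byte(`{"Status":200,"Application":"hello","UUID":"aaa","Data":"x"}`),
	})
	fmt.Println(n) // 2
}
```

With eight replicas behind Fabio, sampling enough responses should surface all eight UUIDs.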

Stop Consul cluster

make docker-teardown-consul-discovery

This stops all of the Docker containers whose names include golangconsuldiscovery-.

Load Testing

While local load testing doesn’t simulate real-world network scenarios, comparisons in the same environment can at least illustrate performance differences — and hopefully improvements. Below are the results of testing our cluster with one, four, and eight application replicas. I used Ddosify for the tests. You can see how to set everything up in the repository’s README.

1 Container Replica
-------------------
Total Requests: 6,000
Request Success: 233
Request Fail: 5,767
Avg: 2,840 (ms)
Min: 7 (ms)
Max: 9,933 (ms)

4 Container Replicas
--------------------
Total Requests: 6,000
Request Success: 5,796
Request Fail: 204
Avg: 290 (ms)
Min: 5 (ms)
Max: 9,873 (ms)

8 Container Replicas
--------------------
Total Requests: 6,000
Request Success: 6,000
Request Fail: 0
Avg: 17 (ms)
Min: 5 (ms)
Max: 86 (ms)

For each test above, I sent 6,000 requests to the Fabio load balancer endpoint over one minute. With one container, we see more request failures than successes. With four containers, we get much closer to a 100% request success rate. In both cases, the maximum response time approaches 10 seconds, the point at which requests to the API server start backing up and timing out, causing the failures. By the time we scale to eight containers, we achieve a 100% success rate, and the maximum response time has plummeted from nearly 10 seconds to under 100 milliseconds!

--

Software Engineer, currently exploring Golang and posting articles on my findings and discoveries as I dive deeper into the language.