
How To Set Up Docker for a Small Enterprise

Loic Joachim · Published in Better Programming · Dec 14, 2022

The Docker whale with some containers

Environment

A typical small setup has a NAS and an application server. In this guide, I use a Synology NAS and Alpine Linux, my preferred OS for the host server.

To see code examples, check out my GitHub repository.

The Problem With Docker and Volumes

There is a lot of conflicting information on the proper way to set up Docker. Much of the confusion comes from Docker’s own documentation, which says things like, “Volumes are the best way to persist data in Docker.” That advice proved very misleading in my quest to figure out the right way to set things up.

As far as I can tell, Docker has no cookie-cutter solution for backing up containers. If you try copying your data out of a volume or bind mount, you will hit permission issues with most containers. Similarly, if you try to create an SMB or NFS mount from your NAS and store the containers’ files there, you will run into permission issues, because the files’ owner needs to be the user each container runs as, and that user will likely not exist on the host environment or your NAS.

Once you restore those backups, the permissions on the restored files will have changed, and your container will no longer be able to use them as its own user. Volumes managed by Docker are not much easier to work with: they are meant for storing data that the container itself creates and uses, you are not supposed to interact with that data directly, and changing the files inside them is difficult.

The Solution

The reality is that the right way to manage and back up a container’s data depends on the container. Below, I list some of the containers I use and how I set them up.

Portainer

If you don’t know Portainer, you should probably use it. It is a super handy tool to check up on your containers and perform basic tasks when you can’t be bothered remembering the command line options. The installation instructions are here. No backups required.
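
For reference, the Portainer CE install from their documentation boils down to a named volume plus one docker run (the ports and image tag shown here are the documented defaults at the time of writing; check their docs for the current command):

docker volume create portainer_data

docker run -d -p 8000:8000 -p 9443:9443 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest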

My production Portainer container view

Plex, Sonarr, Jackett (and Download Station)

I use Plex as my media centre, and Sonarr and Jackett to make sure I always have the latest episodes of my TV shows. This media data is relatively unimportant to me, and I don’t care about backing it up. However, there is a lot of it, so it is impractical to store it anywhere but on my NAS.

These three containers, therefore, need access to a shared folder on my Synology NAS. In the past, I used the SMB protocol (apk add samba-client), but while the speed was adequate, I found it too unreliable: from time to time, the containers would suddenly lose certain permissions on the mount and could no longer delete files, or sometimes could not write at all. I ended up settling on NFS, which works much better between Unix systems, as it was designed for them and avoids many of the permission issues.

To set it up, I followed the instructions from Synology for the server side. Then I followed these instructions to mount the drive in Alpine.

1. Install the nfs-utils package
$ sudo apk add nfs-utils
$ sudo rc-update add nfsmount
$ sudo rc-service nfsmount start

2. Mount the NFS share with mount.nfs
$ NFS_SERVER=drive.mydomain.nz
$ NFS_DIR=/volume1/Media
$ sudo mkdir -p /mnt/Media
$ sudo mount -t nfs ${NFS_SERVER}:${NFS_DIR} /mnt/Media

3. Mount the NFS share on boot
$ echo "${NFS_SERVER}:${NFS_DIR} /mnt/Media nfs _netdev 0 0" | \
  sudo tee -a /etc/fstab

4. If needed, unmount
$ sudo umount -f -l /mnt/Media
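
To check that the share is actually mounted before pointing any containers at it, the usual commands apply:

$ mount | grep nfs
$ df -h /mnt/Media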

You can find my docker run commands for these containers in /Docker/containers/plex/ in the repository. (I need to convert them to Docker Compose but haven’t gotten around to it yet.)
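
To give a sense of how the NFS mount plugs into one of these containers, here is a hypothetical docker run for Plex based on the linuxserver.io image (the image, IDs, timezone, and paths are illustrative assumptions, not a copy of my actual command):

docker run -d --name plex \
  --network host \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=Pacific/Auckland \
  -v /var/lib/plex:/config \
  -v /mnt/Media:/media \
  --restart unless-stopped \
  lscr.io/linuxserver/plex:latest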

The only problem I had after setting this up was that, on boot, the containers would start before the NFS drive was mounted, causing them all to error out until I restarted them. To fix this, I told the Docker service to start after the NFS client service by adding this line to the end of its OpenRC config file, /etc/conf.d/docker:

# Command added by admin to make Docker start after network drive has been mounted
rc_need="nfsmount"
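
To apply the new dependency without waiting for a reboot, restarting the service with the standard OpenRC command should be enough:

$ sudo rc-service docker restart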

GitLab

GitLab is easy enough to set up; its documentation is the best I’ve ever seen for a Docker container. To back up GitLab, you run a command inside the container, which creates a backup archive.

However, that backup skips a couple of important files (for security reasons, which I have chosen to ignore). So I created a shell script in /Docker/containers/gitlab that automates the process and copies everything to your mounted drive; you can schedule it with crontab -e.
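
The gist of that script, assuming the container is named gitlab and the NFS share from earlier is the destination (the destination path and the host bind mount path are assumptions; the full script is in the repo):

# Create the backup archive inside the container
docker exec -t gitlab gitlab-backup create

# gitlab.rb and gitlab-secrets.json are excluded from the archive for
# security reasons, so copy them out separately
docker cp gitlab:/etc/gitlab/gitlab-secrets.json /mnt/Media/backups/gitlab/
docker cp gitlab:/etc/gitlab/gitlab.rb /mnt/Media/backups/gitlab/

# Copy the archive itself out of the host bind mount
cp /var/lib/gitlab/data/backups/*.tar /mnt/Media/backups/gitlab/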

MsSQL — Microsoft SQL

In /Docker/containers/mssql, you can find the script that should be run as a cron job on the host OS. It runs a command inside the MsSQL container to create the backup, then copies the backup file from your bind mount to your NAS.
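
A hypothetical sketch of the same idea (container name, credentials, database name, and paths are all assumptions; the real script is in the repo):

# Create a .bak file inside the container with sqlcmd
# (newer images ship sqlcmd under /opt/mssql-tools18/bin instead)
docker exec sqlserver /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U SA -P "$SA_PASSWORD" \
  -Q "BACKUP DATABASE [MyDb] TO DISK = N'/var/opt/mssql/backup/MyDb.bak'"

# Copy the .bak file from the host bind mount to the NAS share
cp /var/lib/mssql/backup/MyDb.bak /mnt/Media/backups/mssql/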

MySQL

For this script (in /Docker/containers/mysql), I opted to use the mysqldump utility, which I felt was more versatile: it lets me run the script from my Synology NAS and connect remotely to the MySQL server to dump the databases.
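
The core of that approach looks roughly like this (hostname, credentials, and output path are assumptions):

mysqldump --host=mysql.mydomain.nz --port=3306 \
  --user=backup --password="$MYSQL_BACKUP_PASSWORD" \
  --all-databases --single-transaction \
  | gzip > /volume1/Backups/mysql/all-databases-$(date +%F).sql.gz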

Ouroboros — Container Updater

This is just a good tool to have. It updates containers to the latest versions.

# MONITOR limits updates to the listed containers, CLEANUP removes the old
# images it replaces, and SELF_UPDATE lets Ouroboros update its own image.
docker run -d --name ouroboros \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e LATEST=false \
  -e SELF_UPDATE=true \
  -e MONITOR="gitlab portainer sqlserver mysql knowledge docker_web_1 nodejs-internal" \
  -e CLEANUP=true \
  --restart unless-stopped \
  pyouroboros/ouroboros

Knowledge Base

docker pull koda/docker-knowledge

# Create a world-writable bind-mount directory for the app's data
mkdir /var/lib/knowledge
chmod a+w /var/lib/knowledge

docker run -d \
  -p 8085:8080 \
  -v /var/lib/knowledge:/root/.knowledge \
  --restart unless-stopped \
  --name knowledge \
  koda/docker-knowledge

Node.js

In my case, there isn’t much to back up for Node because my data is all stored in a database, and Node doesn’t create any files I want to keep. If I make changes to my Node code, I copy them into the bind mount folder using WinSCP and then restart the container from the command line like this:

$ cd /var/lib/nodejs
$ docker-compose down
$ docker-compose up -d

Nginx

This is one of the most important containers for using Docker: it allows you to host multiple websites on one machine. As I prefer Node.js over PHP, I don’t use its PHP capabilities; I have it host static websites and act as an application proxy. It looks at which hostname visitors asked for when they were directed to the server and routes the traffic to the appropriate container. I often run six or seven websites on a single Alpine host with no issues, thanks to this.
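
As a sketch of the reverse-proxy part, a server block along these lines forwards one hostname to a Node.js container (the hostname, upstream container name, and port are assumptions):

server {
    listen 80;
    server_name app.mydomain.nz;

    location / {
        # Route requests for this hostname to the Node.js container
        proxy_pass http://nodejs-internal:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}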

Backing Up Your Synology

After you have backed up all your Docker files to the Synology, find a good cloud storage provider, set it up in Hyper Backup, and configure a backup task to regularly upload everything to the cloud.
