Dockerize a React App and an Express API With MongoDB

A simple guide on how to move your React app, Express API, and MongoDB to Docker using containers

Mac Rusek
Better Programming


Image by Mohamed Hassan on PxHere

For the sake of simplicity, I assume you already have a working front end and back end, as well as a connected database.

The best approach is to keep both the API and client repos in one folder. You can have one remote repo containing both of them, or use two separate remote repos and then combine them in a parent repo using Git submodules. That's how I did it.

Parent repo folder tree
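
I'll assume the layout that the docker-compose file later in this post relies on: a services folder holding both submodules. Roughly:

.
└── services
    ├── api
    └── client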

React App

I used Create React App (CRA) with TypeScript for my project. It was a simple blog with a couple of views.

The first thing to do is to create a Dockerfile in the client root folder. To do that, just type:

$ touch Dockerfile

Open the file and let's fill it in. I'm using TypeScript with my CRA, so first I have to build my application. Then I take the output and host it as static files. To achieve that, we'll go with a two-stage Docker build.

The first stage uses Node to build the app. I use the Alpine variant, as it's the lightest, so our container will be tiny.

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build

That's how the beginning of the Dockerfile looks. We use node:12-alpine as the builder stage, then set the working directory to /app, which creates a new folder in our container. We copy our package.json into that folder and install all of the packages. Next, we copy everything from the services/client folder into our container. The last bit of that step is to build everything.

Now we have to host our freshly created build. To do that, we're going to use NGINX, again in the Alpine variant to cut down on size.

FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

We copy the build from the previous stage and paste it into the NGINX html folder. Then we expose port 80, which is the port our container will listen on for connections. The last line starts NGINX.

That’s all for the client part. The whole Dockerfile should look like this:

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --only=prod
COPY . /app
RUN npm run build
FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
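
One optional extra worth adding next to the Dockerfile is a .dockerignore file, so that COPY . /app doesn't drag a local node_modules folder or an old build output into the image:

node_modules
build
.git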

Express API

The API is quite simple as well, with RESTful routing to create posts, handle authorization, and so on. Let's start by creating a Dockerfile in the API root folder, the same way we did in the previous part.
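
As before, that's just:

$ touch Dockerfile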

I used ES6 features, so I have to compile everything to Vanilla JS to run it, and I went with Babel. As you can guess, that’s going to be a two-stage build again.
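
The build script lives in the API's package.json. Assuming the Babel CLI with the source in a src folder compiled into dist (which is what the second stage below copies), it would look something like this:

"scripts": {
  "build": "babel src -d dist"
}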

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install
COPY . /app
RUN npm run build

It’s very similar to the client’s Dockerfile, so I won’t be explaining it again. There’s one difference, though.

RUN apk --no-cache add --virtual builds-deps build-base python

I used bcrypt to hash my passwords before saving them to the database. It’s a very popular package, but it has some problems when using Alpine images. You might find errors similar to:

node-pre-gyp WARN Pre-built binaries not found for bcrypt@3.0.8 and node@12.16.1 (node-v72 ABI, musl) (falling back to source compile with node-gyp)
npm ERR! Failed at the bcrypt@3.0.8 install script.

It’s a well-known problem, and the solution is to install additional packages and Python before installing npm packages.

The next stage, similar to the client's, is to take the built API and run it with Node.

FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/dist /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install --only=prod
EXPOSE 8080
USER node
CMD ["node", "index.js"]

One difference is that we install only production packages; we don't need Babel anymore, as everything was compiled in stage one. Then we expose port 8080 to listen for requests, switch to the non-root node user, and start Node.

The whole Dockerfile should look like this:

FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install
COPY . /app
RUN npm run build
FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/dist /app
COPY package.json /app/package.json
RUN apk --no-cache add --virtual builds-deps build-base python
RUN npm install --only=prod
EXPOSE 8080
USER node
CMD ["node", "index.js"]

Docker Compose

The last step is to combine the API and client containers with a MongoDB container. To do that, we use a docker-compose file placed in the parent repo's root directory, as it needs access to both the client's and the API's Dockerfiles.

Let’s create the docker-compose file:

$ touch docker-compose.yml

We should end up with a file structure like the one below.

Parent repo folder tree
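
With the compose file in place, and again assuming the services layout from earlier, the structure looks roughly like this:

.
├── docker-compose.yml
└── services
    ├── api
    │   ├── Dockerfile
    │   └── ...
    └── client
        ├── Dockerfile
        └── ...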

Fill in the docker-compose file with the following code, and I’ll explain it afterward.

version: "3"
services:
  api:
    build: ./services/api
    ports:
      - "8080:8080"
    depends_on:
      - db
    container_name: blog-api
  client:
    build: ./services/client
    ports:
      - "80:80"
    container_name: blog-client
  db:
    image: mongo
    ports:
      - "27017:27017"
    container_name: blog-db

It's really as simple as that. We have three services: the client, the API, and MongoDB. There's no Dockerfile for MongoDB; Docker will pull the official mongo image from Docker Hub and create a container from it. That means our database is ephemeral (its data lives only inside the container), but for a start it's enough.
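
If you do want the data to survive container removal, a named volume can be mounted into the db service. This isn't part of the setup above, just a possible extension:

services:
  db:
    image: mongo
    ports:
      - "27017:27017"
    container_name: blog-db
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data: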

In the API and client services, we have a build key, which points to each service's Dockerfile location (its root folder). The ports key publishes the port the container listens on (the one exposed in its Dockerfile) to the same port on the host, so we can reach the services from outside; within the Compose network, the containers reach each other by container name. The API service also has a depends_on key, which tells Compose to start the db container before the API container. Note that depends_on only waits for the container to start, not for MongoDB to be ready to accept connections, so the API should still be prepared to retry its initial connection.

One more thing for MongoDB: in the back end's codebase, we have to update the MongoDB connection string. Usually, it points to localhost:

mongodb://localhost:27017/blog

But with docker-compose, it has to point to a container name:

mongodb://blog-db:27017/blog
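
In the API code, that might look roughly like this (a sketch assuming Mongoose; the MONGO_URL environment variable is just an optional convenience, with the container name as the fallback):

import mongoose from "mongoose";

// Inside the Compose network, "blog-db" resolves to the MongoDB container
const mongoUrl = process.env.MONGO_URL || "mongodb://blog-db:27017/blog";

mongoose
  .connect(mongoUrl, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log("Connected to MongoDB"))
  .catch((err) => console.error("MongoDB connection error:", err));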

The final touch is to run everything with the following command in the parent repo root directory (where the docker-compose.yml is):

$ docker-compose up

That’s all. More reading than coding, I guess. Thanks for staying till the end!
