How Are You Structuring Your Go Microservices?

Here’s my proposed solution based on a real-world project

Tai Vong · Better Programming · Jun 15, 2022 · 7 min read


The term microservices is undoubtedly one of the hottest trends these days. Many companies have transformed their big, hard-to-change monoliths into many smaller moving parts called microservices. For that purpose, Golang is definitely one of the most viable choices at this time: it helps developers quickly build tiny but powerful services.

Back in the day, we used to focus on how to architect a well-managed application with many abstraction layers. These days, we can split it into many tiny deployable units. Multiple teams can work on smaller projects, and any individual can contribute and grow by developing them. All of these units interact with each other to solve bigger problems.

So the problem has moved from how to create and architect one big project to how to create many smaller projects and keep them well designed and flexible enough for development and extension.

Recently, I took part in a small assignment from a company that required me to implement an API server. I had to propose a suitable architecture that is simple to understand, well organized, and maintainable.

After reflecting on everything I have been doing at my current company, I drafted a solution based on a boilerplate gRPC service that is ready to deploy on any platform. It still has many points to improve, but I'd love to share the project.

The project structure was inspired by the Standard Go Project Layout project. I have also added my own touches based on Clean Architecture principles.

Project structure

Overall, a Golang project has many packages serving many use cases. Only a few store the Go implementation itself (cmd, configs, internal, pkg); the others hold testing mocks (mocks, test), documentation (docs), integration contracts (proto, pb), and local development support (local).

  • cmd: contains all entry points of the application
  • configs: stores all the configurations of the application
  • docs: all the available documents of the application
  • internal: private packages of the project
  • pkg: public packages of the project
  • local: everything that lets anyone run your code in a local environment.
  • mocks: generated mocks for unit testing your application.
  • proto: protobuf definitions for your microservices (I use them with gRPC for my company's internal service communication; you could store another contract here instead, such as a JSON Schema, GraphQL, or SOAP contract).
  • pb (or maybe api): code generated from the protobuf definitions.
  • test: integration tests for your microservices.
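Put together, the layout described above looks roughly like this (directories only; the supporting files are covered in the next section):

```
.
├── cmd/
├── configs/
├── docs/
├── internal/
├── pkg/
├── local/
├── mocks/
├── proto/
├── pb/
└── test/
```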

Extra ingredients for a production-ready microservice

For microservices that are ready to launch in production, we engineers cannot just build some runnable binary. Production is a chaotic environment: data changes far faster than in an experimental setting, and pitfalls and obstacles lie everywhere. Any of them may break your application completely.

To deal with those problems, we need a way to observe our services and act quickly when troubleshooting. The source code must be maintainable, extensible, and meet a lot of non-functional requirements.

  • For version control, I suggest git, as it's the world's most obvious choice in this field. You cannot develop the project on your own; sooner or later, you will need more people to join you. Keep the project well documented for handovers and for new members getting up to speed: a Makefile that shortens common commands (and doubles as a good guide to installing and running your project) is a good choice, and a README.md is a must.
  • For any project, unit testing should be a focus to keep the road clear and less risky. Well-implemented unit tests lower the risk of modifying source code, especially as your project grows at an exponential rate. I have been using golangci-lint as a linter (checking that the source follows the language's best practices lets us focus on other aspects more easily). go test is a perfect tool and gets even better in Go 1.18 with the introduction of fuzzing; beyond that, you only need a mocking tool to more easily achieve the package-layer segregation of Clean Architecture.
  • For building and deploying your microservices, the easiest solution nowadays is to ship them with container technology. Your application runs in a small, isolated environment, so when one service fails, the others should not be affected. The most commonly used stack at present is Docker plus Kubernetes. Keeping the deployment process lean is the key to minimizing the risk of rolling out and replacing services. A streamlined process also lets you ship hotfixes faster, reducing the size of incidents.

So I placed all those things in the project:

  • Dockerfile & .dockerignore: containerize your application so it runs easily on almost any present-day infrastructure.
  • .git & .gitignore: git version control is currently the best choice for managing your project.
  • .golangci.yaml: the configuration for golangci-lint, one of the best linters available for Go at the moment.
  • mockery.yaml: generates your mocks for unit-testing purposes.
  • Makefile: creates incredibly convenient shortcut commands.
  • README.md: definitely the soul of your project.
  • buf.gen.yaml & buf.work.yaml: use buf.build for proto code generation.
  • tools.go: declares your Go build-tool dependencies.
  • go.mod & go.sum: Go module files.
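As a sketch, a typical tools.go pins build tools such as buf and mockery via blank imports (the module paths below are the real upstream projects, but the exact tool set is up to you):

```go
//go:build tools

// Package tools pins build-tool dependencies in go.mod so that
// `go mod tidy` keeps them versioned without linking them into
// the application binary. The build tag excludes this file from
// normal builds.
package tools

import (
	_ "github.com/bufbuild/buf/cmd/buf"
	_ "github.com/vektra/mockery/v2"
)
```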

A generic architecture

In the work Clean Architecture, we are introduced to how software should be designed. In the most typical cases, we need Entities, Use Cases, and many supporting utility packages. In a microservices ecosystem, we keep the same terminology with an extended context: domain entities and use cases are no longer confined to one service but shared across services. The design of the layers inside each microservice, however, stays the same.

In the most general cases, I usually implement:

  • Entities: the core domain objects of the service. A service has no purpose in existence without the business entities.
  • Use cases: cover the business interactions and behaviors of each entity.
  • Frameworks and drivers: utilities that serve generic needs, not only those of the service in question.
  • Repositories: wrap the behavior of storing and managing entity state. I have seen many services implemented with the wrong point of view here: a repository can be another external microservice just as well as a DB engine, and a transaction handled inside the repositories should be treated as a globally distributed one.
  • API: the only interface external services need to know in order to interact with the service.
  • The entry points: where you assemble all your components into a single runnable binary that serves features, such as server listening and request handling.

I have designed and implemented many microservices, from heavily business-oriented ones to deeply technical ones. None of them needed anything beyond this layer classification, and following the dependency rule stated in Clean Architecture truly makes them cooperate in a lovely way.

The microservices ecosystem frees me from designing complicated, sophisticated architectures inside any single tiny microservice, and lets me focus instead on solving the problem at a higher level, with many replaceable moving parts forming a bigger pipeline.

Integration level

In general, each microservice represents a module intentionally split out of the bigger picture. That module must have its own business domain (its reason to exist) and the corresponding layers and architecture (enough flexibility to be modified).

Considering the need for a way to flexibly plug and play these modules, ideally as if they were natively plugged-in parts, gRPC has a great deal to offer us at this moment.

  • A solution for handling backward and forward compatibility.
  • You can generate gRPC clients in many different languages and use them as natively plugged-in parts of your code base (the lower networking protocol is handled silently).
  • HTTP/2 is a nice future-proof protocol.
  • Well-designed principles and a wide, open community of supporting resources.
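For illustration, a minimal protobuf contract for such a service might look like this (the package, service, and message names are hypothetical, not from the boilerplate):

```
syntax = "proto3";

package example.v1;

// Hypothetical contract for an API server exposing a domain object.
service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}

message GetUserRequest {
  string id = 1;
}

message GetUserResponse {
  string id = 1;
  string email = 2;
}
```

Numbered fields are what give protobuf its backward and forward compatibility: old clients simply ignore fields they don't know about.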

In the old days, I ran into many problems with service communication. Most of them came from mismatches in the communication contract; the efforts to solve this show through the development history of JSON Schema and GraphQL. Another is the cost of trying to write a good HTTP client that provides stable communication (with retries, connection pooling, etc.). gRPC and protobuf drive those problems away in an incredibly simple manner. For use cases that don't require handling the request immediately, you can also keep using protobuf as the serialization format in a queuing, polling, or streaming system.

With a clear communication method, the remaining moving parts of the ecosystem are:

  • The API server: provides interaction with the service's domain objects.
  • Cron jobs: workers that run at fixed moments of the day.
  • Polling jobs: consumers, or loop-to-the-end-of-the-world jobs, that process application-produced data.

Conclusion

Through a hiring challenge, I had a chance to consolidate my point of view on designing backend software after years of practice. I loved sharing my thoughts with everyone working on Go backend services, and I hope to hear from others and improve it.
