Kafka With Java, Spring, and Docker — Asynchronous Communication Between Microservices

An in-depth explanation of how to implement messaging between Java microservices using Kafka

Pedro Luiz
Better Programming

Table of Contents

  1. What we are going to build
  2. Kafka in a nutshell
  3. Topics
  4. Partitions
  5. Setting up projects
  6. Docker environment for Kafka
  7. Producer Microservice
  8. Consumer Microservice
  9. Advanced features
  10. Conclusion

In this article, we will discuss how Kafka works as a message broker and how to use it for communication between microservices, by developing two Spring microservices.

1. What we are going to build

The idea is to create a Producer Microservice that receives food orders to be created and passes them along, through Kafka, to a Consumer Microservice, where they are persisted in a database.

2. Kafka in a nutshell

A Kafka cluster is highly scalable and fault-tolerant, meaning that if any of its servers fails, the other servers take over its work to ensure continuous operation without any data loss.

An event records the fact that something happened and carries a message, which can be pretty much anything, for example a string, an array, or a JSON object. When you read or write data to Kafka, you do so in the form of these events.

Producers are those that publish (write) events to Kafka, and consumers are those that subscribe (read and process) these events.

3. Topics

Events are organized and durably stored in topics. A topic is similar to a folder, and the events are the files in that folder. Topics are multi-producer and multi-subscriber, meaning that a topic can have zero, one, or many producers and subscribers.

Events can be read as many times as needed; unlike traditional messaging systems, Kafka does not delete events after they are consumed. Instead, you define for how long Kafka should retain those events.

4. Partitions

Topics are partitioned, meaning that a topic is spread over a number of buckets. When a new event is published to a topic, it is actually appended to one of the topic’s partitions. Events with the same event key are written to the same partition. Kafka guarantees that any consumer of a given topic-partition will always read that partition’s events in the exact same order as they were written.

To make your data fault-tolerant and highly available, every topic can be replicated, even across regions or data centers, so that there are always multiple brokers that have a copy of the data just in case things go wrong (they will).

5. Setting Up Projects

Go to start.spring.io and create the two projects with the following dependencies:

Producer Microservice: at a minimum, Spring Web and Spring for Apache Kafka.

Consumer Microservice: at a minimum, Spring Web, Spring for Apache Kafka, Spring Data JPA, and a database driver (H2, for example).

6. Docker Environment for Kafka

In the root of either project (or both), create a docker-compose.yml file containing the configuration required to run Kafka, Kafdrop, and ZooKeeper in Docker containers.
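The exact compose file is up to you; a minimal sketch, assuming the standard Confluent images for ZooKeeper and Kafka and the obsidiandynamics/kafdrop image, could look like this (ports and listener settings are illustrative):

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Internal listener for other containers, host listener for the Spring apps
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    depends_on:
      - kafka
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
```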

From the root folder of one of the projects, run docker-compose up in the terminal. You can then access Kafdrop, a web interface for managing Kafka, at http://localhost:9000.

There you can view, create, and delete topics, and much more.

7. Producer Microservice

Architecture:

Steps

  • Create configuration beans
  • Create a Food Order topic
  • Create Food Order Controller, Service, and Producer
  • Convert orders into messages in a string format to send to the broker

First, we configure the environment variables and the port for our API to run on:
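As a sketch, an application.yml along these lines would do; the topic name matches the t.food.order topic used later, while the port and bootstrap address are assumptions:

```yaml
server:
  port: 8080

topic:
  name: t.food.order

spring:
  kafka:
    bootstrap-servers: localhost:9092
```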

The Config class is responsible for creating the KafkaTemplate bean, which will be used to send the messages, and for creating the food order topic.
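A possible version of this configuration class, assuming string keys and values and the topic.name property defined above, is:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class Config {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${topic.name}")
    private String orderTopic;

    // Producer factory with string serializers, since we send the order as a JSON string
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    // KafkaTemplate used by the producer to publish messages
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    // Creates the food order topic on startup if it does not exist yet
    @Bean
    public NewTopic foodOrderTopic() {
        return TopicBuilder.name(orderTopic).partitions(1).replicas(1).build();
    }
}
```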

Here’s the model class for FoodOrder:
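As a sketch, assuming item and price fields (any serializable fields would do):

```java
// Plain POJO representing a food order on the producer side
public class FoodOrder {

    private String item;   // hypothetical field
    private Double price;  // hypothetical field

    public FoodOrder() {
    }

    public FoodOrder(String item, Double price) {
        this.item = item;
        this.price = price;
    }

    public String getItem() { return item; }

    public void setItem(String item) { this.item = item; }

    public Double getPrice() { return price; }

    public void setPrice(Double price) { this.price = price; }
}
```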

The FoodOrderController is responsible for receiving a food order request and passing it along to the service layer.
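A minimal sketch, with the /order path and the return type chosen for illustration:

```java
import com.fasterxml.jackson.core.JsonProcessingException;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/order")
public class FoodOrderController {

    private final FoodOrderService foodOrderService;

    public FoodOrderController(FoodOrderService foodOrderService) {
        this.foodOrderService = foodOrderService;
    }

    // Receives a food order request and delegates it to the service layer
    @PostMapping
    public ResponseEntity<String> createFoodOrder(@RequestBody FoodOrder foodOrder) throws JsonProcessingException {
        return ResponseEntity.ok(foodOrderService.createFoodOrder(foodOrder));
    }
}
```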

The FoodOrderService is responsible for receiving the food order and passing it along to the producer.
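A sketch of the service, simply delegating to the producer (method names are assumptions):

```java
import com.fasterxml.jackson.core.JsonProcessingException;

import org.springframework.stereotype.Service;

@Service
public class FoodOrderService {

    private final Producer producer;

    public FoodOrderService(Producer producer) {
        this.producer = producer;
    }

    // Passes the incoming food order along to the Kafka producer
    public String createFoodOrder(FoodOrder foodOrder) throws JsonProcessingException {
        return producer.sendMessage(foodOrder);
    }
}
```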

The Producer is responsible for receiving the food order and publishing it as a message to Kafka.
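A sketch of the producer, assuming Jackson's ObjectMapper for the JSON conversion and the topic.name property configured earlier:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class Producer {

    private static final Logger log = LoggerFactory.getLogger(Producer.class);

    @Value("${topic.name}")
    private String orderTopic;

    private final ObjectMapper objectMapper;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public Producer(ObjectMapper objectMapper, KafkaTemplate<String, String> kafkaTemplate) {
        this.objectMapper = objectMapper;
        this.kafkaTemplate = kafkaTemplate;
    }

    public String sendMessage(FoodOrder foodOrder) throws JsonProcessingException {
        // Convert the FoodOrder object into a JSON string
        String orderAsMessage = objectMapper.writeValueAsString(foodOrder);
        // Publish the message to the food order topic
        kafkaTemplate.send(orderTopic, orderAsMessage);
        log.info("food order produced {}", orderAsMessage);
        return "message sent";
    }
}
```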

First, we convert the FoodOrder object into a string in JSON format, so that it can be received as a string by the consumer microservice.

Then we actually send the message through the KafkaTemplate, passing the topic to publish to (injected from the topic-name property) and the order as the message.

When running the application, we should be able to see the topic created in Kafdrop. And when sending a food order, we should be able to see in the logs that the message was sent.

Now, if we open the newly created t.food.order topic under the Topics section in Kafdrop, we should be able to see the message.

8. Consumer Microservice

Architecture:

Steps

  • Create a configuration for beans and group-id
  • Create database access
  • Create Food Order Consumer and Service
  • Create a Food Order Repository

We start by configuring the port for our API to run on, the topic to listen to, a group-id for our consumer, and the database configuration:
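A sketch of the consumer's application.yml; the port, the group-id, and the in-memory H2 database are assumptions:

```yaml
server:
  port: 8081

topic:
  name: t.food.order

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: food-order-group
  datasource:
    url: jdbc:h2:mem:foodorderdb
    driver-class-name: org.h2.Driver
    username: sa
    password: ""
  jpa:
    hibernate:
      ddl-auto: update
```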

The Config class is responsible for configuring the ModelMapper bean. ModelMapper is a library used for mapping one object to another, for example when using the DTO pattern, which we will make use of here.
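The configuration class can be as small as exposing the ModelMapper bean (it requires the org.modelmapper:modelmapper dependency):

```java
import org.modelmapper.ModelMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class Config {

    // ModelMapper bean used to map the incoming DTO to the JPA entity
    @Bean
    public ModelMapper modelMapper() {
        return new ModelMapper();
    }
}
```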

Here are the Model classes:
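As a sketch, in separate files, an entity plus a DTO without the database-generated ID, again assuming item and price fields; on Spring Boot 2.x the JPA annotations come from javax.persistence rather than jakarta.persistence:

```java
// FoodOrder.java - JPA entity that will be persisted
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;

@Entity
public class FoodOrder {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String item;
    private Double price;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getItem() { return item; }
    public void setItem(String item) { this.item = item; }
    public Double getPrice() { return price; }
    public void setPrice(Double price) { this.price = price; }
}

// FoodOrderDto.java - DTO used to deserialize the consumed message
public class FoodOrderDto {

    private String item;
    private Double price;

    public String getItem() { return item; }
    public void setItem(String item) { this.item = item; }
    public Double getPrice() { return price; }
    public void setPrice(Double price) { this.price = price; }
}
```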

The Consumer is responsible for listening to the food order topic and consuming any message published to it. We convert each consumed message into a FoodOrderDto object, which does not carry everything related to the entity that will be persisted, such as the ID.
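A sketch of the listener, assuming string messages and Jackson for deserialization; the group-id placeholder matches the property configured above:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class Consumer {

    private static final Logger log = LoggerFactory.getLogger(Consumer.class);

    private final ObjectMapper objectMapper;
    private final FoodOrderService foodOrderService;

    public Consumer(ObjectMapper objectMapper, FoodOrderService foodOrderService) {
        this.objectMapper = objectMapper;
        this.foodOrderService = foodOrderService;
    }

    // Invoked whenever a message is published to the food order topic
    @KafkaListener(topics = "${topic.name}", groupId = "${spring.kafka.consumer.group-id}")
    public void listen(String message) throws JsonProcessingException {
        log.info("food order consumed {}", message);
        // Deserialize the JSON string into a DTO that does not carry the database ID
        FoodOrderDto foodOrderDto = objectMapper.readValue(message, FoodOrderDto.class);
        foodOrderService.persistFoodOrder(foodOrderDto);
    }
}
```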

The FoodOrderService is responsible for converting the consumed order into a FoodOrder entity and passing it along to the persistence layer to be persisted.
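A sketch of the service, using ModelMapper to turn the DTO into the entity before saving it:

```java
import org.modelmapper.ModelMapper;
import org.springframework.stereotype.Service;

@Service
public class FoodOrderService {

    private final FoodOrderRepository foodOrderRepository;
    private final ModelMapper modelMapper;

    public FoodOrderService(FoodOrderRepository foodOrderRepository, ModelMapper modelMapper) {
        this.foodOrderRepository = foodOrderRepository;
        this.modelMapper = modelMapper;
    }

    // Maps the consumed DTO to the FoodOrder entity and persists it
    public void persistFoodOrder(FoodOrderDto foodOrderDto) {
        FoodOrder foodOrder = modelMapper.map(foodOrderDto, FoodOrder.class);
        foodOrderRepository.save(foodOrder);
    }
}
```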

The code for the FoodOrderRepository is:
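Assuming a Long primary key, a plain Spring Data JPA interface is enough:

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface FoodOrderRepository extends JpaRepository<FoodOrder, Long> {
}
```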

Now, simply by running the Consumer Microservice, the messages already published to the order topic will be consumed.

An important detail to notice here is that if we go to Kafdrop and check the message we just consumed, it will still be there. That is something that would not happen with RabbitMQ, for example.

9. Advanced features

We can send scheduled messages by making use of Spring's scheduling support.

Enable it by adding the @EnableScheduling annotation in the configuration class in the Producer Microservice.

The Scheduler is responsible for sending messages at a certain rate; here we will send them at a fixed rate of 1000 milliseconds.
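A sketch of such a scheduler in the Producer Microservice; the message content is just illustrative:

```java
import java.time.LocalDateTime;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class Scheduler {

    private static final Logger log = LoggerFactory.getLogger(Scheduler.class);

    @Value("${topic.name}")
    private String orderTopic;

    private final KafkaTemplate<String, String> kafkaTemplate;

    public Scheduler(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Publishes a message every 1000 milliseconds
    @Scheduled(fixedRate = 1000)
    public void sendScheduledMessage() {
        String message = "scheduled food order at " + LocalDateTime.now();
        kafkaTemplate.send(orderTopic, message);
        log.info("{}", message);
    }
}
```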

The topic will be created automatically, but we could also define the NewTopic bean as we did previously.

The output would show a new message being published every second.

10. Conclusion

The main idea here was to make an introduction to using Kafka with Java and Spring, so you can implement this solution inside a much more complex system.

In case this article helped you in any way, consider giving it a clap, following me and sharing it.

The project on GitHub can be found here. Consider giving it a star!

