Why Is Spring Faster Than Vert.x?

Analyzing the performance difference between the JVM frameworks through benchmarks

Alexey Soshin
Better Programming

--

Photo by Ralfs Blumbergs on Unsplash

The question “Why is Spring faster than Vert.x?”, in its different variations, is asked on StackOverflow about once a month. After all, Spring is still by far the most popular JVM framework, so lots of companies use it. But the Spring Framework isn’t known for its performance. Vert.x, on the other hand, is considered one of the top-performing JVM frameworks. So you’d expect Vert.x to outperform Spring in any benchmark. But that’s not always the case.

In this article, I’d like to address different reasons for those counterintuitive results and make a few suggestions on how to improve your benchmarking approach.

First, what do we mean when we say a framework or language is “fast”? For web services, we usually don’t mean the speed of getting a single response, also known as request latency. What we usually mean is another metric, called throughput. Latency is how long it takes to return a response to a single request. Throughput is how many requests a server can process in a given timeframe, usually a second.

Next, let’s understand where developers get the notion that Vert.x should be faster than Spring. There is a very popular benchmark suite for web frameworks, run by TechEmpower, that attempts to measure the throughput of different languages, runtimes, and frameworks across a few scenarios. Vert.x usually performs very well in those benchmarks.

For example, in the 20th round, Vert.x is 10th with 572K requests per second, while Spring is 219th with 102K requests per second. This is very impressive indeed.

But trying to reproduce those impressive results sometimes proves challenging, and hence the question from the title.

Let’s try to understand the main flaws in the typical benchmarking strategy.

When talking about Spring, I specifically mean the Spring Framework, not Spring WebFlux / Project Reactor, which operates differently. I’ll also assume that the Spring application is running within a Tomcat container.

Vert.x Is I/O-Focused

The ingenuity of the Vert.x framework was in recognising early on that the bottleneck of most real-world applications is waiting for I/O. That means it doesn’t matter how well your application is written, how smart the JIT optimisations are, or how bleeding-edge the JVM GC is. Most of the time, your application will be waiting for a response from the database, or from a service that someone wrote in Python or PHP maybe 10 years ago.

The way Vert.x addresses that problem is by putting any I/O work in a queue. Since putting a new task in a queue is not a particularly heavy operation, Vert.x is able to process hundreds of thousands of such tasks per second.

This is a very simplistic explanation, of course. There are multiple queues, context switches, reactive drivers, and a bunch of other interesting stuff that I won’t cover here. What I do want you to remember, though, is that Vert.x is optimised for I/O.
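To make the queueing idea a bit more concrete, here is a minimal sketch, assuming Vert.x 4 on the classpath (the class name and the number of tasks are arbitrary). runOnContext enqueues a task for an event loop thread and returns immediately, which is why accepting work is so cheap:

import io.vertx.core.Vertx;

public class EventLoopQueueDemo {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Each call enqueues a task on an event loop's queue and returns right away;
        // the event loop threads drain the queue and run the tasks one by one.
        for (int i = 0; i < 5; i++) {
            int id = i;
            vertx.runOnContext(v ->
                System.out.println("task " + id + " ran on " + Thread.currentThread().getName()));
        }
    }
}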

Now, let’s look at how Vert.x performance is usually tested:

app.get("/json").handler(ctx -> {      
ctx.response().end("Hello, World!");
});

Let’s compare the example above with the code from the Vert.x TechEmpower benchmark, which still performs quite well (a throughput of 4M requests per second) but not fantastically compared to some other languages and frameworks:

app.get("/json").handler(ctx -> {      
ctx.response()
.putHeader(HttpHeaders.SERVER, SERVER)
.putHeader(HttpHeaders.DATE, date)
.putHeader(HttpHeaders.CONTENT_TYPE, "application/json")
.end(Json.encodeToBuffer(new Message("Hello, World!")));
}
);

Can you spot the difference? In the benchmark most developers execute, there is almost zero I/O. There is some, yes, because receiving a request and writing a response is still I/O, but not much compared to something like interacting with a database or a filesystem.

So, the advantage that a reactive framework such as Vert.x provides is minimised by such a test.

If you want to see real benefits from a reactive framework such as Vert.x, write a benchmark application that does some I/O work, such as writing to a database or reading from a remote service.
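As an illustration, here is a minimal sketch of such an I/O-bound endpoint, assuming Vert.x 4 with the vertx-web and vertx-web-client modules; the downstream service on localhost:8081 and its /data path are hypothetical stand-ins for whatever your application actually calls:

import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.client.WebClient;

public class IoBenchmarkApp {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        Router app = Router.router(vertx);
        WebClient client = WebClient.create(vertx);

        // Each request triggers a non-blocking call to a remote service; the
        // event loop stays free to accept other requests while the call is in flight.
        app.get("/proxy").handler(ctx ->
            client.get(8081, "localhost", "/data")
                .send()
                .onSuccess(resp -> ctx.response().end(resp.bodyAsString()))
                .onFailure(ctx::fail));

        vertx.createHttpServer().requestHandler(app).listen(8080);
    }
}

Benchmarking an endpoint like this one, with the downstream dependency on a separate machine, exercises exactly the kind of workload Vert.x was designed for.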

Running Benchmarks With Low Concurrency

The way the Spring Framework handles concurrency is by allocating a thread pool dedicated to serving incoming requests. This is also called the “thread-per-request” model. Once you run out of threads, the throughput of your Spring application starts to degrade.
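Here is a minimal sketch of that model, assuming Spring Boot with spring-boot-starter-web (embedded Tomcat); the sleep is a hypothetical stand-in for a database or remote call:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class BlockingApp {

    public static void main(String[] args) {
        SpringApplication.run(BlockingApp.class, args);
    }

    @GetMapping("/blocking")
    String blocking() throws InterruptedException {
        // Simulates waiting on a database or a remote service. The Tomcat worker
        // thread serving this request is parked here and cannot serve anyone else.
        Thread.sleep(100);
        return "Hello, World!";
    }
}

Every in-flight request occupies one worker thread for the full duration of the wait.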

ab -n 10000 -c 100 http://localhost:8080/

Here we use ApacheBench (ab) to bombard our service with requests. The -n flag sets the total number of requests to send, and the -c flag tells it to keep 100 requests running concurrently.

You run this test on two services, one written in Spring, and another in Vert.x, and don’t see any difference in performance. Why is that?

Unlike Vert.x, the Spring Framework doesn’t directly control the number of threads it uses. Instead, the number of threads is controlled by the container, in our case Tomcat. The maximum number of worker threads Tomcat sets by default is 200. This means that until you have at least 200 concurrent requests, you shouldn’t see much difference between the Spring and Vert.x applications. You’re simply not stressing your application enough.

If you want to stress your Spring application, set the number of concurrent requests higher than the maximum size of your thread pool.
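For example, a run like the following (the request count and concurrency level are just illustrative values) pushes well past Tomcat’s default of 200 worker threads:

ab -n 100000 -c 400 http://localhost:8080/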

Running Benchmarks On The Same Machine

Let’s go back to how Vert.x works. I’ve already mentioned that Vert.x optimises its performance by putting all incoming requests in a queue. Once a response arrives, it is also put on the same queue. There is a very limited number of threads, called event loop threads, that are busy processing that queue. The more requests you have, the busier the event loop threads become, and the more CPU they consume.

Now, what happens when you run a benchmark on the same machine as the service? For example:

ab -n 100000 -c 1000 http://localhost:8080/

What happens next is the following: the benchmark tool will attempt to generate as many requests as it can, utilising all of the CPU resources of your machine. The Vert.x service will try to serve all of those requests, also attempting to utilise all of the CPU. The two processes end up competing for the same resources, and both suffer for it.

To maximise the performance of a Vert.x application during a benchmark, make sure to run it on a separate machine that doesn’t share CPU with the machine running the benchmark tool.

This brings us to the next point.

Spring Framework Performance Is Fine

I’ve been a huge fan of Vert.x for the past 5 years at least. But let’s look at the throughput of the Spring application in the benchmarks we mentioned earlier, in requests per second:

  • Plaintext: 28K
  • JSON serialization: 20K
  • Single query: 14K
  • Fortunes: 6K
  • Multiple queries: 1.8K
  • Data updates: 0.8K

Looking at those numbers, and taking into account that we usually run our services in clusters of at least 3 instances, you should be asking yourself: does my application really need to handle 2K updates per second?

If the answer is yes, you may need to run your benchmarks from multiple machines to stress even the Spring application to its breaking point.

Conclusion

As software engineers, we love comparing the performance of our favorite language or framework with others.

And it’s important to use objective metrics while doing so. Measuring service throughput using a benchmark is a good start, but this should be done correctly.

Evaluate whether the test you are running is CPU-bound, I/O-bound, or constrained by some other bottleneck.

Also, make sure that you run your benchmarks on separate machines from those that run your application code. Otherwise, you may not be impressed by the results.

Finally, I’ve seen companies hit the throughput limits of their language or framework, and I’ve even helped solve some of those problems. But there are many successful companies out there that may never need that much throughput, and you may be working for one of them. Getting a benchmark right is hard and takes a lot of time. Think carefully about whether that’s the most critical problem you should be solving.

--

Solutions Architect @Depop, author of the book “Kotlin Design Patterns and Best Practices” and the course “Pragmatic System Design”