HTTP Server Performance: Node.js vs. Go
Which one serves the higher number of concurrent requests?
We are developing something like an ad proxy or Google Ad Buffer. The service simply forwards ad HTTP requests to SSP servers. For this purpose it has to handle a large number of HTTP requests with minimal hardware resources. Therefore we decided to do some research and compare a language running on a virtual machine with a compiled one.
We are pretty familiar with Node.js and JavaScript, so we started by testing HTTP handling on the V8 engine. Of course, we didn't start from scratch; we used the fastify package, which is built on top of Node's native HTTP module. So at the bottom of the software stack sits a compiled, low-level HTTP server, but there is still a thin layer running in V8 on top of it. The question is how much this layer slows execution down.
Node.js
The script is pretty straightforward.
const fastify = require("fastify")({
  logger: false,
});

fastify.get("/fillbuffer", async (request, reply) => {
  reply.type("application/json").code(200);
  return {
    result: `{result: "Hello world"}`,
  };
});

fastify.listen(3008, (err, address) => {
  if (err) throw err;
});
I used the ApacheBench (ab) tool for load testing; -n sets the total number of requests and -c the number of concurrent connections. Let's skip the full hardware specification; it's enough to say the tests ran on an Intel i7-8550U CPU.
ab -n 1000000 -c 100 localhost:3008/fillbuffer

Requests per second: 12925.33 [#/sec] (mean)
Time per request: 7.737 [ms] (mean)
Time per request: 0.077 [ms] (mean, across all concurrent requests)

Percentage of the requests served within a certain time (ms)
50% 8
66% 8
75% 8
80% 8
90% 9
95% 10
98% 12
99% 13
100% 106 (longest request)
Let’s try more concurrent connections.
ab -n 1000000 -c 500 localhost:3008/fillbuffer

Requests per second: 9673.37 [#/sec] (mean)
Time per request: 51.688 [ms] (mean)
Time per request: 0.103 [ms] (mean, across all concurrent requests)

Percentage of the requests served within a certain time (ms)
50% 48
66% 49
75% 50
80% 51
90% 58
95% 79
98% 137
99% 156
100% 286 (longest request)
So far so good. At 500 concurrent connections we hit the CPU limit and the Node.js solution starts struggling, but let's get the Go numbers.
Go
The script is a bit longer but still short.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/valyala/fasthttp"
)

var (
	addr               = ":3008"
	strContentType     = []byte("Content-Type")
	strApplicationJSON = []byte("application/json")
	httpClient         *fasthttp.Client
)

func main() {
	fmt.Println("Starting server...")
	h := requestHandler
	h = fasthttp.CompressHandler(h)

	httpClient = &fasthttp.Client{
		MaxConnsPerHost: 2048,
	}

	if err := fasthttp.ListenAndServe(addr, h); err != nil {
		log.Fatalf("Error in ListenAndServe: %s", err)
	}
}

func requestHandler(ctx *fasthttp.RequestCtx) {
	if string(ctx.Method()) == "GET" {
		switch string(ctx.Path()) {
		case "/fillbuffer":
			ctx.Response.Header.SetCanonical(strContentType, strApplicationJSON)
			ctx.Response.SetStatusCode(200)
			response := map[string]string{"result": "hello world"}
			if err := json.NewEncoder(ctx).Encode(response); err != nil {
				log.Fatal(err)
			}
		}
	}
}
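One detail worth noting: httpClient is initialized but never used by the benchmark handler. In the real ad-proxy service it would be the piece that forwards incoming requests to an SSP. A minimal sketch of that forwarding step, plugging into the server above; the SSP URL and the forwardToSSP name are hypothetical placeholders, not part of the original code:

// forwardToSSP forwards an ad request to an upstream SSP and copies the
// response back to the client. sspURL is a hypothetical placeholder.
func forwardToSSP(ctx *fasthttp.RequestCtx) {
	const sspURL = "http://ssp.example.com/bid" // hypothetical upstream

	req := fasthttp.AcquireRequest()
	resp := fasthttp.AcquireResponse()
	defer fasthttp.ReleaseRequest(req)
	defer fasthttp.ReleaseResponse(resp)

	req.SetRequestURI(sspURL)
	req.Header.SetMethod("GET")

	// Reuse the shared client so MaxConnsPerHost caps upstream connections.
	if err := httpClient.Do(req, resp); err != nil {
		ctx.Error("upstream error", fasthttp.StatusBadGateway)
		return
	}
	ctx.SetStatusCode(resp.StatusCode())
	ctx.SetBody(resp.Body())
}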
As you can see, I decided to use fasthttp as the HTTP server. Unlike fastify, it is not built on top of the standard net/http package; it is an independent implementation of the HTTP protocol, written with performance in mind. To make the contrast concrete, see the net/http sketch below before we move on to the results.
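For comparison only, this is roughly what the same endpoint looks like on the standard library; this variant was not benchmarked, it just shows the stack fasthttp deliberately avoids:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// Same endpoint as the fasthttp server, but on the standard net/http
	// stack. For illustration only.
	http.HandleFunc("/fillbuffer", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodGet {
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		response := map[string]string{"result": "hello world"}
		if err := json.NewEncoder(w).Encode(response); err != nil {
			log.Println(err)
		}
	})
	log.Fatal(http.ListenAndServe(":3008", nil))
}

Let's see the result for 100 concurrent requests.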
ab -n 1000000 -c 100 localhost:3008/fillbuffer

Requests per second: 15847.80 [#/sec] (mean)
Time per request: 6.310 [ms] (mean)
Time per request: 0.063 [ms] (mean, across all concurrent requests)

Percentage of the requests served within a certain time (ms)
50% 6
66% 7
75% 7
80% 7
90% 7
95% 7
98% 8
99% 8
100% 18 (longest request)
Well, the numbers look really good against the Node.js solution, especially the distribution of requests served within a certain time; it's almost flat. Let's run the final test.
ab -n 1000000 -c 500 localhost:3008/fillbuffer

Requests per second: 14682.27 [#/sec] (mean)
Time per request: 34.055 [ms] (mean)
Time per request: 0.068 [ms] (mean, across all concurrent requests)

Percentage of the requests served within a certain time (ms)
50% 34
66% 36
75% 37
80% 37
90% 39
95% 40
98% 41
99% 41
100% 62 (longest request)
Conclusion
As you can see, the Go solution's serving time is still nearly flat. It looks like there is still headroom for even more concurrent requests, but let's just compare the basic numbers.
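Side by side, using the throughput numbers measured above:

Concurrency   Node.js (req/s)   Go (req/s)
100           12925.33          15847.80
500            9673.37          14682.27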
Go is the clear winner here, especially at the higher number of concurrent connections.
So the tiny layer running in V8 is not so tiny after all. With 100 concurrent requests, Node.js served over 18% fewer requests per second than Go, and at 500 concurrent requests the gap grew to over 34%.