Performance Impact of Maps Compared to Slices in Go 1.18
Does using a custom key affect performance?

As I mentioned in part one, you can work with fixed-size as well as dynamic-size data collections. Last time, we compared the performance of arrays and slices, and the result was quite stunning. If you haven’t read that article yet, I recommend doing so first, although it’s not mandatory for this one. Today, we’re going to look at maps. Or, to be more specific, we will examine whether using a custom key affects performance compared to slices.
Intro
In the first part, we formulated our main questions. We already answered the first one in the previous article, but the second is still open.
- ~~Slices vs. Arrays. Which performs better?~~
- Slices vs. Maps! Does using a custom key affect the performance?
Maps vs. Slices
To get a more detailed view, I created two benchmark tests for each data structure: one that appends values and one that retrieves some of them. The procedure is quite similar to the first part: we run the benchmark tests, save the results to a file, and compare them with benchstat. Below you can see how they are structured.
Map benchmark test
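The original listing isn’t reproduced here, so the following is only a minimal sketch of how such a map benchmark could look. The helper name appendToMap, the dataset size of 100,000, and the lookup values are assumptions made for illustration, not the original code.

```go
// map_test.go (hypothetical file name) — sketch of the map benchmarks.
package main

import "testing"

// appendToMap populates a map with n sequential key/value pairs.
func appendToMap(n int) map[int]int {
	m := make(map[int]int)
	for i := 0; i < n; i++ {
		m[i] = i
	}
	return m
}

// BenchmarkMapAppend measures how long it takes to populate the map.
func BenchmarkMapAppend(b *testing.B) {
	for i := 0; i < b.N; i++ {
		appendToMap(100000) // assumed dataset size
	}
}

// BenchmarkMapRetrieve measures direct lookups by key.
func BenchmarkMapRetrieve(b *testing.B) {
	m := appendToMap(100000)
	targets := [...]int{9, 999, 99999} // assumed lookup values
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for _, t := range targets {
			_ = m[t]
		}
	}
}
```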
Slice benchmark test
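And here is the corresponding sketch for the slice version, again with an assumed helper name (appendToSlice) and assumed values. Its retrieval benchmark performs a linear search, exactly as described below.

```go
// slice_test.go (hypothetical file name) — sketch of the slice benchmarks.
package main

import "testing"

// appendToSlice populates a slice with n sequential values.
func appendToSlice(n int) []int {
	s := make([]int, 0)
	for i := 0; i < n; i++ {
		s = append(s, i)
	}
	return s
}

// BenchmarkSliceAppend measures how long it takes to populate the slice.
func BenchmarkSliceAppend(b *testing.B) {
	for i := 0; i < b.N; i++ {
		appendToSlice(100000) // assumed dataset size
	}
}

// BenchmarkSliceRetrieve measures a linear search for the same target values.
func BenchmarkSliceRetrieve(b *testing.B) {
	s := appendToSlice(100000)
	targets := [...]int{9, 999, 99999} // assumed lookup values
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for _, t := range targets {
			for _, v := range s {
				if v == t {
					break
				}
			}
		}
	}
}
```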
Enriching the data collection: So we have two functions. The first appends numbers to the map/slice; how many depends on the number passed into the function. The second is the actual benchmark test: it calls the first function and measures how long it takes for the map/slice to be populated with all the numbers.
Retrieving values from the data collection: In the second benchmark test, we want to retrieve values from the slice/map. As you can see above, we have an array containing all the values we want to retrieve. We iterate over this array and measure how long it takes to fetch those values from the two different data structures. In the case of the slice, we iterate over the dataset until we find the value; in the case of the map, we simply access the value through its key.
Comparison
Now comes one of the most exciting parts: running, storing, and comparing the benchmarks. In the previous article, we saw a remarkable difference between arrays and slices. Will it be the same this time? We will see. First, we need to run our benchmark tests, and for this, Go provides a pretty simple out-of-the-box command.
go test -bench="BenchmarkName" -run=^# -count=x | tee filename.txt
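Here, -bench selects the benchmarks to run via a regular expression, -run=^# ensures that no regular unit tests run alongside them, and -count controls how many times each benchmark is repeated. Assuming the benchmark names from the sketches above and ten repetitions, the two calls could look like this:
go test -bench="BenchmarkSlice" -run=^# -count=10 | tee slice.txt
go test -bench="BenchmarkMap" -run=^# -count=10 | tee map.txt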
Enrich datasets
After we have stored the results, we can compare them with each other. We start with the benchmarks that append new values.
benchstat slice.txt map.txt
You probably expected the map to be much slower than the slice version here, but were you aware of how big the difference is? As you can see below, for small datasets it is relatively tiny, but as the numbers grow, the gap widens rapidly.
In our case, the difference peaks at around 1630%. To be sure, I ran the tests several times, and the results were nearly identical.

Retrieve values from datasets
Now it’s time for our second benchmark test. Remember, in this case, we are retrieving values from our map/slice. What we see below is probably what most of you already expected. A map lookup has an average complexity of O(1), while searching a slice is O(n). This means the map takes roughly the same amount of time regardless of the dataset size, while the time to retrieve a value from the slice depends on how many elements have to be iterated over first.

Takeaway
I know, often it’s simply necessary or cleaner to use either a map or a slice. But if you have a choice, be aware of the performance difference. Generally speaking: the larger your dataset, the bigger the performance gap when adding or retrieving values.
Final Thoughts
This was the second part of “Fixed vs. Dynamic Sized Data Collections.” I hope the comparison was interesting and that you learned something new. If you have any remarks or questions, it would be great if you left them in the comments. See you soon.
P.S. This article is part of my current series. In upcoming articles, I’ll look at various generic helper functions, interesting benchmarks, and useful features.
If you’re as excited as I am, stay tuned!