K8s: A Closer Look at Kube-Proxy
An example showing how kube-proxy plays with iptables

The Kubernetes network proxy (aka kube-proxy) is a daemon running on each node. It reflects the Services defined in the cluster and maintains the rules that load-balance requests to a Service’s backend pods.

Quick example: Let’s say we have several pods of an API microservice running in our cluster, with those replicas being exposed by a service. When a request reaches the service virtual IP, how is the request forwarded to one of the underlying pods? Well… simply by using the rules that kube-proxy created. OK, it’s not that simple under the hood, but we get the big picture here.
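To make the "rules that kube-proxy created" a bit more concrete: in iptables mode, kube-proxy builds a chain of DNAT rules using the `statistic` match. With n backends, the first rule fires with probability 1/n, the second with 1/(n-1) of the remaining traffic, and so on, so each pod ends up with an even share. A minimal Python sketch of that selection logic (the pod names are made up for illustration):

```python
import random

def pick_backend(backends, rng=random.random):
    """Walk the rules sequentially, as iptables does: rule i fires
    with probability 1/(n-i); the last rule always matches."""
    n = len(backends)
    for i, backend in enumerate(backends):
        if rng() < 1.0 / (n - i):
            return backend
    return backends[-1]  # unreachable: the last rule has probability 1.0

# Simulate many requests hitting the service virtual IP.
pods = ["api-pod-a", "api-pod-b", "api-pod-c"]
counts = {p: 0 for p in pods}
random.seed(42)
for _ in range(30_000):
    counts[pick_backend(pods)] += 1
print(counts)  # roughly 10,000 hits per pod: the chain spreads traffic evenly
```

The cascading probabilities (1/3, then 1/2, then 1) are exactly what you see in the `--probability` values of the `KUBE-SVC-*` chains that kube-proxy generates.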
kube-proxy can run in three different modes:
- iptables (default mode)
- ipvs
- userspace (“legacy” mode, no longer recommended)
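The mode is selected through the kube-proxy configuration (the `mode` field of a `KubeProxyConfiguration`, which kubeadm stores in the `kube-proxy` ConfigMap in the `kube-system` namespace); leaving it empty falls back to the iptables default. A minimal sketch of the relevant fragment:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# "" or "iptables" (default), "ipvs", "userspace"
mode: "ipvs"
```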
While the iptables mode is totally fine for many clusters and workloads, ipvs can be useful when the number of services is large (more than 1,000 or so). Since iptables rules are evaluated sequentially, routing performance can degrade as the number of services in the cluster grows.
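The cost difference can be sketched with plain data structures: matching a packet in iptables mode is essentially a linear scan over every service rule, while ipvs resolves the destination with a hash lookup. A toy illustration (the virtual IPs and service names are invented):

```python
# Toy model of routing one packet to its service:
#   iptables mode ~ a linear scan over an ordered rule list,
#   ipvs mode     ~ a single hash lookup on the destination IP:port.

services = {f"10.96.{i // 256}.{i % 256}:80": f"svc-{i}" for i in range(1, 2001)}
rules = list(services.items())  # iptables-like ordered rule list

def iptables_lookup(dst):
    comparisons = 0
    for vip, svc in rules:  # rules are evaluated one by one
        comparisons += 1
        if vip == dst:
            return svc, comparisons
    return None, comparisons

def ipvs_lookup(dst):
    # one hash lookup, independent of the number of services
    return services.get(dst), 1

dst = f"10.96.{2000 // 256}.{2000 % 256}:80"  # the last service's virtual IP
print(iptables_lookup(dst))  # ('svc-2000', 2000) -- scanned every rule
print(ipvs_lookup(dst))      # ('svc-2000', 1)
```

This deliberately ignores everything else iptables does per packet, but it captures why the linear rule list becomes the bottleneck with thousands of services.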
Tigera (the creator and maintainer of the Calico networking solution) details the differences between the iptables and ipvs modes in this great article, including a high-level comparison of the two.

In this article, we will focus on the iptables mode (an upcoming article will be dedicated to ipvs mode) and thus illustrate how kube-proxy defines iptables rules.
For that purpose, we will use a two-node cluster that I’ve just created using kubeadm:
$ kubectl get nodes
NAME    STATUS   ROLES                  AGE   VERSION
k8s-1   Ready    control-plane,master   57s   v1.20.0
k8s-2   Ready    <none>                 41s   v1.20.0