Build a Federation of Multiple Kubernetes Clusters With Kubefed V2

A step-by-step guide to building a Kubernetes federation for managing multiple regions’ clusters with KubeFed

Andrea Wang
Better Programming



What Is KubeFed?

KubeFed (Kubernetes Cluster Federation) allows you to use a single Kubernetes cluster to coordinate multiple Kubernetes clusters. You can use it to deploy applications across clusters in different regions and to design for disaster recovery.

To learn more about KubeFed: https://github.com/kubernetes-sigs/kubefed

Prerequisites

Your Kubernetes clusters must be up and running on Kubernetes v1.13+.

In this article, we’ll use three Kubernetes clusters. One serves as the host cluster (named lab), where the federation control plane is installed. The other two, named lab-a and lab-b, are for deploying applications.

Kubernetes Cluster Preparation

KubeFed CLI Installation

kubefedctl is the KubeFed command-line utility. It currently supports Linux and macOS only. In the host cluster, check the following link for the latest release, then run the commands below to install it: https://github.com/kubernetes-sigs/kubefed/releases

# Replace VERSION and OS
VERSION=<latest-version, e.g. 0.1.0-rc3>
OS=<darwin/linux>
ARCH=amd64
curl -LO https://github.com/kubernetes-sigs/kubefed/releases/download/v${VERSION}/kubefedctl-${VERSION}-${OS}-${ARCH}.tgz
tar -zxvf kubefedctl-*.tgz
chmod u+x kubefedctl
sudo mv kubefedctl /usr/local/bin/ #make sure the location is in the PATH

You can check your kubefedctl version via:

kubefedctl version
Kubefedctl version

KubeFed Installation

KubeFed is deployed with a Helm chart. In the host cluster, use the following commands to install the Helm CLI (Helm v2.10+):

curl -LO https://git.io/get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Then install Helm Tiller on the Kubernetes cluster:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: '0'
        image: 'gcr.io/kubernetes-helm/tiller:v2.16.9'
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: tiller
      serviceAccountName: tiller
      terminationGracePeriodSeconds: 30
EOF

What Is Helm?

Helm is the package manager for Kubernetes. It helps you manage your Kubernetes applications, making installation and upgrades easy via Helm charts.

To learn more about Helm: https://helm.sh/

Run the following commands to initialize Helm, add the KubeFed chart repository, and check the KubeFed chart version:

helm init --service-account tiller
helm repo add kubefed-charts https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
helm search kubefed
Kubefed Chart Version

Install KubeFed v0.3.0 in kube-federation-system namespace (default) with the following command:

helm install kubefed-charts/kubefed --name=kubefed --version=0.3.0 --namespace kube-federation-system --devel --debug

# Check if kubefed is ready
kubectl get pod -n kube-federation-system
Kubefed Installation Status

Cluster Registration

In the host cluster, set up the kubectl config for lab-a and lab-b so we can access those clusters by switching contexts and use the contexts to join them to the federation:

# Replace CLUSTERNAME, CLUSTERIP, USERNAME, TOKEN, CONTEXTNAME
kubectl config set-cluster CLUSTERNAME --server=CLUSTERIP
kubectl config set-credentials USERNAME --token="TOKEN"
kubectl config set-context CONTEXTNAME --cluster=CLUSTERNAME --user=USERNAME

Check the contexts for all clusters:

kubectl config get-contexts
Kubernetes contexts

Use kubefedctl join to register clusters into the host cluster:

# Replace JOINED_CLUSTER_NAME, HOST_CLUSTER_NAME, HOST_CLUSTER_CONTEXT, JOINED_CLUSTER_CONTEXT
kubefedctl join JOINED_CLUSTER_NAME --host-cluster-name=HOST_CLUSTER_NAME --host-cluster-context=HOST_CLUSTER_CONTEXT --cluster-context=JOINED_CLUSTER_CONTEXT

# --kubefed-namespace string
# If you installed KubeFed into a specific namespace instead of the default
# ("kube-federation-system"), pass that namespace, i.e. the namespace in the
# host cluster where the KubeFed system components are installed.

# Example:
kubefedctl join lab-a-us-west-2 --host-cluster-name=lab --host-cluster-context=eks-admin@lab.us-west-2.eksctl.io --cluster-context=eks-admin@lab-a.us-west-2.eksctl.io

After you’ve joined the clusters, you can check their status with the command below:

kubectl -n kube-federation-system get kubefedclusters
Kubefed Joined Clusters

Your federation clusters are ready now.

Deploy a Service

Note that you have to create a namespace in the host cluster first and then federate it to the joined clusters.

In the host cluster, use the following commands to create the namespace, federate it, and create an nginx deployment across the joined clusters. Change the cluster names to the ones you used when joining.

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
EOF

cat << EOF | kubectl apply -f -
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: test-namespace
  namespace: test-namespace
spec:
  placement:
    clusters:
    - name: lab-a-us-west-2
    - name: lab-b-ap-northeast-1
EOF

cat << EOF | kubectl apply -f -
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx
  placement:
    clusters:
    - name: lab-a-us-west-2
    - name: lab-b-ap-northeast-1
EOF

After deployment, you will see that the nginx deployments are up and running in both clusters:

# Check for lab-a
kubectl get deployment -n test-namespace --context eks-admin@lab-a.us-west-2.eksctl.io

# Check for lab-b
kubectl get deployment -n test-namespace --context eks-admin@lab-b.ap-northeast-1.eksctl.io
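
Other namespaced resources can be federated the same way. As an illustration only (assuming the Service federated API type is enabled in your KubeFed installation, and using a hypothetical test-service name), a FederatedService exposing the nginx pods could look like this:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  name: test-service          # hypothetical name, not part of the walkthrough above
  namespace: test-namespace
spec:
  template:                   # an ordinary Service spec, stamped out per cluster
    spec:
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80
  placement:                  # same placement mechanism as the FederatedDeployment
    clusters:
    - name: lab-a-us-west-2
    - name: lab-b-ap-northeast-1
```

Saved to a file and applied with kubectl apply -f in the host cluster, this would create a matching Service in each placed cluster.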

You can also override the replica count, image version, and so on for specific clusters only by defining overrides in the YAML file:

cat << EOF | kubectl apply -f -
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx
  placement:
    clusters:
    - name: lab-a-us-west-2
    - name: lab-b-ap-northeast-1
  overrides:
  - clusterName: lab-a-us-west-2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
    - path: "/spec/template/spec/containers/0/image"
      value: "nginx:1.17.0-alpine"
    - path: "/metadata/annotations"
      op: "add"
      value:
        foo: bar
    - path: "/metadata/annotations/foo"
      op: "remove"
EOF

After deployment, you’ll see that the nginx deployment’s replica count, image version, etc., in lab-a have been modified:

kubectl describe deployment -n test-namespace --context eks-admin@lab-a.us-west-2.eksctl.io

That’s all for the application deployment testing.
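
One further step worth sketching (not covered in the walkthrough above): KubeFed also ships an optional ReplicaSchedulingPreference API for weighting replicas across clusters instead of hand-tuning overrides. Assuming the scheduling types are enabled in your installation, a preference that splits nine replicas 2:1 between the two lab clusters might look like:

```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment       # must match the FederatedDeployment name
  namespace: test-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9            # distributed across clusters by weight
  clusters:
    lab-a-us-west-2:
      weight: 2               # roughly 6 replicas
    lab-b-ap-northeast-1:
      weight: 1               # roughly 3 replicas
```

This is a sketch under those assumptions; check your KubeFed version’s documentation before relying on it.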

Now you’ll be able to use the federation to manage your clusters and application!

My Working Version
