Opinionated Kubernetes 2: Cluster networking with kube-router

If you’ve not already read it, check out Part 1 where we deploy our Kubernetes master node using Kubeadm.

Introduction

In this post, we’ll cover deploying our cluster network of choice - kube-router.

There are many choices for Kubernetes cluster networking. I’ve chosen kube-router for a few reasons:

  • The project logo is awesome
  • Purpose built for Kubernetes - no extra fluff, no compatibility weirdness, hopefully just works
  • No external datastore
  • Implements a Kubernetes NetworkPolicy controller (see the example after this list)
  • Supports BGP
  • Provides an optional IPVS based replacement for kube-proxy
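
We won’t write any policies in this post, but to give a taste of what that NetworkPolicy controller can enforce, here’s a minimal example that denies all ingress traffic to pods in its namespace. This is standard Kubernetes API, nothing kube-router specific - treat it as a sketch rather than something we’ll apply here:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress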

Let’s get to it

Most Kubernetes networking implementations run as a DaemonSet within your cluster - this means that each node in your cluster runs a copy of the pod, and that pod is responsible for implementing networking for Kubernetes - and that’s exactly how kube-router runs. Let’s go ahead and deploy it using the default configuration - with cluster networking, with network policy, and without the kube-proxy replacement:

$ kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/336989088a24c0fd483db0a28a3d0b14129a360e/daemonset/kubeadm-kuberouter.yaml
configmap "kube-router-cfg" created
daemonset "kube-router" created
serviceaccount "kube-router" created
clusterrole "kube-router" created
clusterrolebinding "kube-router" created
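
If you’re impatient, you can check on the DaemonSet resource itself while the pod initialises (the name comes from the output above):

$ kubectl -n kube-system get daemonset kube-router

It should report one desired pod - one per node, which for now is just our master.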

And now, we wait for the new kube-router pod to become ready:

$ kubectl -n kube-system get pods
NAME                              READY     STATUS     RESTARTS   AGE
etcd-dl165g7                      1/1       Running    0          1d
kube-apiserver-dl165g7            1/1       Running    0          1d
kube-controller-manager-dl165g7   1/1       Running    0          1d
kube-dns-6f4fd4bdf-55mzp          0/3       Pending    0          1d
kube-proxy-cg5g2                  1/1       Running    0          1d
kube-router-zns2d                 0/1       Init:0/1   0          6s
kube-scheduler-dl165g7            1/1       Running    0          1d

$ kubectl -n kube-system get pods
NAME                              READY     STATUS    RESTARTS   AGE
etcd-dl165g7                      1/1       Running   0          1d
kube-apiserver-dl165g7            1/1       Running   0          1d
kube-controller-manager-dl165g7   1/1       Running   0          1d
kube-dns-6f4fd4bdf-55mzp          0/3       Pending   0          1d
kube-proxy-cg5g2                  1/1       Running   0          1d
kube-router-zns2d                 1/1       Running   0          38s
kube-scheduler-dl165g7            1/1       Running   0          1d

Okay - it’s up and running! At this point, kube-dns will still show a status of “Pending” and a container readiness of “0/3”. Give it a few more seconds and Kubernetes should schedule and start the kube-dns pod:

$ kubectl -n kube-system get pods
NAME                              READY     STATUS    RESTARTS   AGE
etcd-dl165g7                      1/1       Running   0          1d
kube-apiserver-dl165g7            1/1       Running   0          1d
kube-controller-manager-dl165g7   1/1       Running   0          1d
kube-dns-6f4fd4bdf-55mzp          3/3       Running   0          1d
kube-proxy-cg5g2                  1/1       Running   0          1d
kube-router-zns2d                 1/1       Running   0          1m
kube-scheduler-dl165g7            1/1       Running   0          1d
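
If you’re curious what kube-router is actually doing under the hood, its logs are worth a peek (substitute your own pod name from the output above):

$ kubectl -n kube-system logs kube-router-zns2d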

With that - we have our cluster network deployed. In future posts, we’re going to build on this configuration - by adding BGP peering, the replacement for kube-proxy, etc. For now though, this is all we’re going to need.
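
For reference, those behaviours are toggled by flags on the kube-router container inside the DaemonSet. At the time of writing, the relevant args in the manifest we applied look roughly like this (a hypothetical excerpt - check the manifest itself for the authoritative values):

args:
  - --run-router=true
  - --run-firewall=true
  - --run-service-proxy=false

Flipping --run-service-proxy to true is what enables the IPVS based kube-proxy replacement.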

Testing the deployment

Now that we have a master node and a cluster networking implementation deployed, we can test things out and make sure it’s all really working. To do this, we’ll deploy a set of pods running a web server into Kubernetes, and expose them using a NodePort service.

NodePort services are effectively the lowest common denominator when it comes to providing access to pods from outside Kubernetes. Each NodePort service is allocated a port in the 30000-32767 range (by default). Each and every node within your cluster will listen on that port, and when a connection arrives, forward it to the correct node, and then on into a pod. This has some obvious limitations - for example port 80, used for HTTP, is unavailable. And even if it were, only one service in the cluster could claim it. Obviously, this is not how things should be! We’ll be using Google’s MetalLB later to solve this, but for now, let’s roll with a NodePort.

Let’s start by creating the Deployment YAML file - nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
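      # Our only node so far is the master, which kubeadm taints with
      # node-role.kubernetes.io/master:NoSchedule - this toleration lets
      # the nginx pods be scheduled there anyway.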
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
          operator: Exists
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80

We’ll then apply this Deployment to Kubernetes, and wait for the pods to become ready:

$ kubectl apply -f nginx-deployment.yaml
deployment "nginx" created

$ kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-768ccb4756-msk7r   1/1       Running   0          2m        172.31.16.4   dl165g7
nginx-768ccb4756-mttcv   1/1       Running   0          2m        172.31.16.5   dl165g7
nginx-768ccb4756-xgk6j   1/1       Running   0          2m        172.31.16.3   dl165g7

Here we can see 3 pods have been started, each with its own IP address in our pod network range.
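
Before creating the Service, we can sanity-check the pod network directly. Pod IPs aren’t reachable from outside the cluster, but from the master node itself a plain curl against one of the pod IPs above should return the nginx welcome page (substitute an IP from your own output):

$ curl -s http://172.31.16.3/ | grep '<title>'
<title>Welcome to nginx!</title>

Now, let’s create the Service.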

We’ll need another YAML file for the Service - nginx-service.yaml:

kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
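      # Optionally, a "nodePort: <port>" line here pins a specific port
      # (it must fall within the default 30000-32767 range); left unset,
      # Kubernetes allocates one for us.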

Again, we’ll apply this YAML, and wait for it to become ready:

$ kubectl apply -f nginx-service.yaml 
service "nginx-service" created

$ kubectl get service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   172.31.12.1     <none>        443/TCP        1d
nginx-service   NodePort    172.31.12.100   <none>        80:30916/TCP   26s
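
As an aside: if you ever need that allocated port programmatically (for scripts or smoke tests), jsonpath can pull it straight out:

$ kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}'
30916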

From the output of kubectl get service, we can see our service has been allocated a cluster IP within our services network, and a NodePort of 30916. If everything worked out like it should have, each of your Kubernetes nodes (and we currently only have one - the master) should now be serving HTTP on http://{master-node-ip}:30916/ - let’s check it out:

[Screenshot: a browser showing the default “Welcome to nginx!” page]

Through either horribly bad luck, or amazingly good luck, the nginx Service was allocated a cluster IP that looks astonishingly like my master node’s IP: the master node is 172.31.0.100, while the nginx Service is 172.31.12.100. To be clear, the browser is pointed at the IP of my master node, not the IP of the Service - that’s how NodePort services work. MetalLB will change this once we deploy it later.
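
No browser handy? curl from any machine that can reach the node works just as well - using the node IP and port from the outputs above, we’d expect a 200 back:

$ curl -s -o /dev/null -w '%{http_code}\n' http://172.31.0.100:30916/
200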

So - we can see the default webpage for the nginx container. This means our network implementation has been deployed successfully, and it’s time to clean up our test resources and move on to deploying MetalLB!

To clean up, we’ll run these two commands:

$ kubectl delete -f nginx-service.yaml 
service "nginx-service" deleted

$ kubectl delete -f nginx-deployment.yaml 
deployment "nginx" deleted

In part 3, we’ll deploy MetalLB to Kubernetes, avoiding the need for clumsy NodePort services. Tomorrow :)