Opinionated Kubernetes 1: Intro & Deploy with Kubeadm


This is the first part in what is likely to be a long series of posts. In this series, I’ll walk through deployment of Kubernetes on bare metal, using all available functionality. This is my “personal cluster” - I know, I know - totally unnecessary. I don’t care :)

I’ll continue posting as I add new features, upgrade, expand, etc - building up a set of hopefully coherent documentation for both myself, and anyone else who finds it useful.

This post will serve as the groundwork, building the bare minimum needed to get to the interesting stuff!


So - What are we going to build? I won’t go into detail on my choices in this post, but I will list them:

  • Bare Metal
  • Ubuntu 16.04
  • Kubernetes 1.9
  • Kubeadm for deployment
  • kube-router for networking
  • nfs-provisioner for dynamic volumes
  • MetalLB to provide Service Type=LoadBalancer support
  • Dex for authentication via OpenID Connect
  • No compromises - all features available on Bare Metal

Basic Requirements

  • 1 or more x86_64 servers running clean Ubuntu 16.04
  • All servers have updates applied
  • A block of IPv4 addresses for Pods
  • A block of IPv4 addresses for Services
  • A block of IPv4 addresses for LoadBalancer services

Installing the master node

You might notice I just said “node”, rather than “nodes”. That’s because as of Kubernetes 1.9, kubeadm does not yet support deployment of highly available clusters! This is a pretty big gap in kubeadm; however, it’s being worked on upstream, and in the end it really doesn’t matter for my use case, as I only have 1 server in my house!

This section is going to be brief, as Kubernetes has some pretty good docs on using kubeadm to deploy a cluster. What I’ll cover here is both the exact commands I use, as well as any differences from the upstream documents, needed to deploy with my set of choices.

Installing Docker

First up, we need to install the Docker container engine. We’ll use the packages provided by Ubuntu, rather than the packages distributed by Docker Corp, as the latest versions of the Docker engine are known to not work very well with Kubernetes. We frankly don’t care about Docker anyway - we’ll be using Kubernetes, and don’t care what’s underneath.

$ apt-get update
$ apt-get install docker.io aufs-tools

Installing Kubeadm, Kubelet, Kubectl

Next, we’ll need to install kubeadm. Ubuntu does not currently include Kubernetes packages within 16.04, so we’ll get them from Google.

First, we’ll need to install the Google apt repository:

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
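Note that the heredoc must be closed by `EOF` on its own line, or the shell will keep reading. If you want to sanity-check what actually landed in the sources list, the same pattern can be exercised against a temporary file (the temp path here is purely for illustration):

```shell
# Write the repo line via a heredoc to a temp file, then verify it arrived intact.
tmpfile=$(mktemp)
cat <<EOF >"$tmpfile"
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
grep 'kubernetes-xenial' "$tmpfile"
```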

Then, we’ll install kubeadm, kubelet, and kubectl:

$ apt-get update
$ apt-get install kubeadm kubelet kubectl

Deployment of the Kubernetes master services

Installing the master services with kubeadm couldn’t be simpler. All we need to do is give it the network CIDRs for our service and pod networks.

I’ve chosen a /22 services network - a network with 1024 IP addresses, and a /20 pod network - a network with 4096 addresses - which should be plenty for my home cluster.
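The address counts above follow directly from the prefix lengths: a /N IPv4 block holds 2^(32-N) addresses. A quick shell check (pure arithmetic, independent of which actual networks you choose):

```shell
# Addresses in a /22 service network: 2^(32-22)
echo $((2 ** (32 - 22)))   # 1024
# Addresses in a /20 pod network: 2^(32-20)
echo $((2 ** (32 - 20)))   # 4096
```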

I’ve made sure to choose networks which are unused elsewhere in my house, so that pods can still reach the other devices attached to my home network.

$ kubeadm init \
    --service-cidr=<your /22 service network> \
    --pod-network-cidr=<your /20 pod network>

We’ll get some output indicating either success, or failure:

[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [dl165g7 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs []
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 98.003879 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node dl165g7 as master by adding a label and a taint
[markmaster] Master dl165g7 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 13cc29.b0bdd145616cb04a
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 13cc29.b0bdd145616cb04a --discovery-token-ca-cert-hash sha256:cb942654bc2087a1cda93ca87d941a5f42c635a738113bc064418dd69b79098c
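The `--discovery-token-ca-cert-hash` value is simply a SHA-256 digest of the cluster CA’s public key, so if you ever lose the join command you can recompute it from `/etc/kubernetes/pki/ca.crt` on the master. The openssl pipeline below is the standard one from the kubeadm docs; since we can’t assume a real CA cert is on hand, this sketch generates a throwaway self-signed certificate to demonstrate against:

```shell
# On a real master, substitute /etc/kubernetes/pki/ca.crt for "$ca_crt".
# Here we generate a throwaway cert so the pipeline is runnable anywhere.
ca_crt=$(mktemp)
ca_key=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes-ca" \
    -keyout "$ca_key" -out "$ca_crt" -days 1 2>/dev/null

# The hash pipeline: extract public key -> DER encode -> sha256 hex digest
hash=$(openssl x509 -pubkey -in "$ca_crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //')
echo "sha256:$hash"
```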

Let’s copy the .kube/config file into place - we’ll run the first few commands recommended in the output above, using our normal, non-root account:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

And - finally for today’s post - validate that it’s working.

$ kubectl cluster-info
Kubernetes master is running at
KubeDNS is running at

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}  

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-dl165g7                      1/1       Running   0          6m
kube-system   kube-apiserver-dl165g7            1/1       Running   0          5m
kube-system   kube-controller-manager-dl165g7   1/1       Running   0          6m
kube-system   kube-dns-6f4fd4bdf-55mzp          0/3       Pending   0          6m
kube-system   kube-proxy-cg5g2                  1/1       Running   0          6m
kube-system   kube-scheduler-dl165g7            1/1       Running   0          6m

As we can see, the Kubernetes API is responding, and all the Kubernetes services are running and healthy. Don’t worry about kube-dns just yet - once we deploy our pod network, those pods will boot right up.
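If you want a quick way to spot pods that haven’t reached Running, filtering on the STATUS column works well. The awk below assumes kubectl’s default column layout (STATUS is field 4); it’s demonstrated here against a canned sample of the output above rather than a live cluster:

```shell
# Against a live cluster you'd pipe from:
#   kubectl get pods --all-namespaces --no-headers
# Sample rows standing in for real output:
sample='kube-system   etcd-dl165g7               1/1   Running   0   6m
kube-system   kube-dns-6f4fd4bdf-55mzp   0/3   Pending   0   6m
kube-system   kube-proxy-cg5g2           1/1   Running   0   6m'

# Print NAME and STATUS for any pod not in the Running state.
pending=$(printf '%s\n' "$sample" | awk '$4 != "Running" {print $2, $4}')
echo "$pending"   # kube-dns-6f4fd4bdf-55mzp Pending
```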


We’ve now prepared our master node - all the kubernetes “master” services are running, and we’re ready to deploy our pod network. We even have a workload (kube-dns) scheduled to it! However, it’s not running yet, as it’s waiting for an available node with a working pod network :)

Next up - we’ll deploy our pod network. However, that’s tomorrow’s post :)