Opinionated Kubernetes 4: Helm package manager & Ingress with nginx

If you’ve not already read them, check out Part 1 where we deploy our Kubernetes master node using Kubeadm, Part 2 where we set up cluster networking with kube-router, and Part 3 where we set up load balancers with MetalLB.

Introduction

In this post, we’ll cover deploying Helm, and then using it to deploy the nginx ingress controller.

Install the Helm client

Let’s start with Helm. We’ll need to download the Helm client binary to our PC. At the time of writing, the latest release is Helm v2.8.1.
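
On Linux, fetching it looks something like this (the URL below assumes the standard kubernetes-helm release bucket used for Helm v2 downloads - check the Helm releases page if in doubt):

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.8.1-linux-amd64.tar.gz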

Once you’ve downloaded it, extract it, move the helm binary to /usr/local/bin/helm, and chmod +x /usr/local/bin/helm. If you’re a Windows or OSX user, hopefully you know how to do the equivalent on your platform - because I don’t :) The Helm docs will help if you’re stuck.

Deploy Tiller, Helm’s server side component

Well - this was harder than expected. Helm doesn’t want to deploy to a master node by default, and doesn’t offer an easy way to do so. We’ve had to add a tolerations section to the Tiller pod using the --override flag and our knowledge of what a Kubernetes Deployment resource looks like. This is brittle, as the Tiller template can change at a moment’s notice, and we may suddenly begin overriding the list of tolerations instead of appending to it.

An alternative is to remove the “master” taint from the master node; however, I’m not planning to do this until a future post where we add some dedicated worker nodes to the cluster!
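
For reference, removing the taint would look something like the command below (dl165g7 being our master node) - but again, we’re not doing that here:

$ kubectl taint nodes dl165g7 node-role.kubernetes.io/master-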

First, we’ll need to create a ServiceAccount and ClusterRoleBinding for Tiller. Create a new file tiller-rbac.yaml with the following contents:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Then, apply to the cluster:

$ kubectl apply -f tiller-rbac.yaml 
serviceaccount "tiller" created
clusterrolebinding "tiller" created
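
If you want to double-check before moving on, the new ServiceAccount should now be visible:

$ kubectl -n kube-system get serviceaccount tiller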

Next, we’ll install tiller:

$ helm init --service-account tiller --override spec.template.spec.tolerations[0].key=node-role.kubernetes.io/master --override spec.template.spec.tolerations[0].effect=NoSchedule
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
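
Because the --override approach is brittle, it’s worth confirming that our tolerations actually made it onto the Deployment - something like this should print them back:

$ kubectl -n kube-system get deployment tiller-deploy -o jsonpath='{.spec.template.spec.tolerations}'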

We can check that Tiller is running by looking at the pods in the kube-system namespace:

$ kubectl -n kube-system get pods
NAME                              READY     STATUS    RESTARTS   AGE
etcd-dl165g7                      1/1       Running   0          7d
kube-apiserver-dl165g7            1/1       Running   0          7d
kube-controller-manager-dl165g7   1/1       Running   0          7d
kube-dns-6f4fd4bdf-55mzp          3/3       Running   0          7d
kube-proxy-cg5g2                  1/1       Running   0          7d
kube-router-zns2d                 1/1       Running   0          5d
kube-scheduler-dl165g7            1/1       Running   0          7d
tiller-deploy-5ccb6c4cb-85l8w     0/1       Pending   0          53s

It’s still pending - that’s OK - it’s probably just downloading the tiller container image.

$ kubectl -n kube-system get pods
NAME                              READY     STATUS    RESTARTS   AGE
etcd-dl165g7                      1/1       Running   0          7d
kube-apiserver-dl165g7            1/1       Running   0          7d
kube-controller-manager-dl165g7   1/1       Running   0          7d
kube-dns-6f4fd4bdf-55mzp          3/3       Running   0          7d
kube-proxy-cg5g2                  1/1       Running   0          7d
kube-router-zns2d                 1/1       Running   0          5d
kube-scheduler-dl165g7            1/1       Running   0          7d
tiller-deploy-5ccb6c4cb-85l8w     1/1       Running   0          1m

And - it’s running. Let’s test it by checking the server version:

$ helm version
Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
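
Both the client and server report v2.8.1, so Helm is ready to use. helm init also configures the stable chart repository for us; it doesn’t hurt to refresh the local chart cache before installing anything:

$ helm repo update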

Deploy Nginx ingress using Helm

So - Helm - what is it? Helm is a package manager for Kubernetes. It lets you deploy “charts” - bundles of templated Kubernetes resources baked into the chart - rendered with a set of values that provide the instance configuration. These values are how you customize the deployed instance of a chart, and you’ll almost always want to customize something. So - let’s start an nginx-ingress-values.yaml:

---
controller:
  tolerations:
   - key: "node-role.kubernetes.io/master"
     operator: "Exists"
     effect: "NoSchedule"

  config:
    hsts: "false"
    proxy-body-size: "0"

defaultBackend:
  tolerations:
   - key: "node-role.kubernetes.io/master"
     operator: "Exists"
     effect: "NoSchedule"

rbac:
  create: true

We’ve created a YAML file which includes the overrides we want to apply on top of the default values. The default values can be found here. We’ve added a toleration for the master node taint to both of the Deployments this chart manages, and we’ve told the chart that yes, we have RBAC enabled, and that it should create the necessary RBAC resources.
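
If you’d rather read the defaults from the command line, Helm can print the chart’s full values file too:

$ helm inspect values stable/nginx-ingress --version 0.9.4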

Let’s deploy:

$ helm install stable/nginx-ingress --version 0.9.4 --name nginx-ingress --namespace kube-system --values nginx-ingress-values.yaml
NAME:   nginx-ingress
LAST DEPLOYED: Wed Feb 21 22:38:46 2018
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME                           DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
nginx-ingress-controller       1        1        1           0          1s
nginx-ingress-default-backend  1        1        1           0          1s

==> v1/Pod(related)
NAME                                            READY  STATUS             RESTARTS  AGE
nginx-ingress-controller-d7fb49479-d99f7        0/1    ContainerCreating  0         1s
nginx-ingress-default-backend-7544489c4b-kd2dk  0/1    ContainerCreating  0         1s

==> v1beta1/ClusterRole
NAME           AGE
nginx-ingress  2s

==> v1beta1/ClusterRoleBinding
NAME           AGE
nginx-ingress  2s

==> v1beta1/Role
NAME           AGE
nginx-ingress  1s

==> v1beta1/RoleBinding
NAME           AGE
nginx-ingress  1s

==> v1/Service
NAME                           TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
nginx-ingress-controller       LoadBalancer  172.31.12.225  172.31.1.0   80:32411/TCP,443:31853/TCP  1s
nginx-ingress-default-backend  ClusterIP     172.31.15.161  <none>       80/TCP                      1s

==> v1/ConfigMap
NAME                      DATA  AGE
nginx-ingress-controller  1     2s

==> v1/ServiceAccount
NAME           SECRETS  AGE
nginx-ingress  1        2s


NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace kube-system get services -o wide -w nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Helm will create a whole bunch of resources, and then (depending on the chart!) give you some useful info on how to get started. Let’s ignore all that for now, and just check up on the new pods we should be running:

$ kubectl -n kube-system get pods
NAME                                             READY     STATUS    RESTARTS   AGE
etcd-dl165g7                                     1/1       Running   0          7d
kube-apiserver-dl165g7                           1/1       Running   0          7d
kube-controller-manager-dl165g7                  1/1       Running   0          7d
kube-dns-6f4fd4bdf-55mzp                         3/3       Running   0          7d
kube-proxy-cg5g2                                 1/1       Running   0          7d
kube-router-zns2d                                1/1       Running   0          6d
kube-scheduler-dl165g7                           1/1       Running   0          7d
nginx-ingress-controller-d7fb49479-d99f7         1/1       Running   0          1m
nginx-ingress-default-backend-7544489c4b-kd2dk   1/1       Running   0          1m
tiller-deploy-6446fbc7f6-jzb9w                   1/1       Running   0          2m

Great! Both the nginx-ingress-controller, and nginx-ingress-default-backend pods are up and running. What about the service?

$ kubectl -n kube-system get svc
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-dns                        ClusterIP      172.31.12.10    <none>        53/UDP,53/TCP                7d
nginx-ingress-controller        LoadBalancer   172.31.12.225   172.31.1.0    80:32411/TCP,443:31853/TCP   2m
nginx-ingress-default-backend   ClusterIP      172.31.15.161   <none>        80/TCP                       2m
tiller-deploy                   ClusterIP      172.31.13.214   <none>        44134/TCP                    41m

Excellent, it’s been given an external IP of 172.31.1.0 - visiting http://172.31.1.0 in a browser will give you the message default backend - 404. Ingress is now set up and running. If, like me, you got the same external IP as you did in Part 3, make sure to hit F5 to reload the page.
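
If you prefer the command line, curl should show the same message (a quick sanity check, not part of the original steps):

$ curl http://172.31.1.0/
default backend - 404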

Testing the nginx ingress deployment

We’ll start by deploying our nginx-deployment.yaml from Part 1 and Part 2:
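
In case you no longer have that file handy, a minimal equivalent looks something like this - three replicas of the stock nginx image, labelled app: nginx so the Service further down can select them (the apps/v1 apiVersion assumes Kubernetes 1.9 or newer):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80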

$ kubectl apply -f nginx-deployment.yaml
deployment "nginx" created

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-768ccb4756-7lkvl   1/1       Running   0          9s
nginx-768ccb4756-j94db   1/1       Running   0          9s
nginx-768ccb4756-mlpzr   1/1       Running   0          9s

Next, we’ll create a ClusterIP service. We’ll need a nginx-service-cip.yaml file with:

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service-cip
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

And we apply and check as usual:

$ kubectl apply -f nginx-service-cip.yaml 
service "nginx-service-cip" created

$ kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes          ClusterIP   172.31.12.1    <none>        443/TCP   7d
nginx-service-cip   ClusterIP   172.31.13.44   <none>        80/TCP    8s
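
Optionally, we can sanity-check the ClusterIP service from inside the cluster before the Ingress gets involved - this throwaway busybox pod (not part of the original steps) should print the nginx welcome page HTML:

$ kubectl run -it --rm curl-test --image=busybox --restart=Never -- wget -qO- http://nginx-service-cip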

Finally, we’ll create a new Ingress resource that points at this service. This time, we’ll create an nginx-ingress.yaml file:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  backend:
    serviceName: nginx-service-cip
    servicePort: 80

And, yet again, apply and check:

$ kubectl apply -f nginx-ingress.yaml 
ingress "nginx-ingress" created

$ kubectl get ingress
NAME            HOSTS     ADDRESS   PORTS     AGE
nginx-ingress   *                   80        10s

This is the simplest Ingress resource possible, and it’s set up to catch any traffic aimed at the ingress controller. So - visiting http://172.31.1.0/ should give you the nginx welcome page, instead of the default nginx ingress 404 page (i.e. the default backend - 404 page). Hit F5 if you’re not sure you’re seeing the right thing!
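
Again, curl works just as well as a browser here - the page title should confirm traffic is reaching the nginx pods (another quick check, not part of the original steps):

$ curl -s http://172.31.1.0/ | grep '<title>'
<title>Welcome to nginx!</title>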

Assuming you’re now seeing the default nginx welcome page, it’s time to clean up our test resources!

To clean up, we’ll run these three commands:

$ kubectl delete -f nginx-ingress.yaml 
ingress "nginx-ingress" deleted

$ kubectl delete -f nginx-service-cip.yaml
service "nginx-service-cip" deleted

$ kubectl delete -f nginx-deployment.yaml
deployment "nginx" deleted
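
Note that this only removes our test Deployment, Service, and Ingress - the nginx-ingress release itself stays in place, since we’ll keep using it. For reference, if you ever did want to remove a Helm release entirely, Helm handles that too (don’t run this now - we still need the ingress controller!):

$ helm delete --purge nginx-ingress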

In part 5, we’re going to deploy the NFS dynamic volume provisioner.

Updated on 2018/02/25: Added the HSTS and proxy-body-size config options to nginx-ingress-values.yaml.