In this post, we’ll cover deploying our LoadBalancer implementation of choice - MetalLB. Not all Kubernetes clusters need to make this choice; for example, if you’re running in a public cloud, you will more than likely want to use that public cloud’s load balancer service instead of rolling your own. However, back in Part 1, I mentioned two of my choices - “Bare Metal”, and “No compromises - all features available on Bare Metal”. As such, we must support Type=LoadBalancer services.
Type=LoadBalancer has generally been difficult on Bare Metal, as the Kubernetes implementation supported only cloud providers, and when running on your own metal - you usually don’t have one of those. Things have gotten better, and projects implementing support for these Type=LoadBalancer services have started springing up. I’ve personally tried two of them to date:
- MetalLB: This is what we’re deploying today.
- Keepalived cloud provider: This is an external cloud-provider-interface implementation that uses keepalived to move VIPs around your Kubernetes nodes.
I’ve chosen MetalLB because:
- The project is shepherded by Google.
- I’ve used Keepalived CPI before, and want to try something new.
- Keepalived CPI requires exposing /dev and /lib/modules into a container.
- Keepalived CPI requires running a privileged container.
- MetalLB supports BGP.
Let’s get to it
MetalLB will need two components, both of which will be deployed into Kubernetes as ordinary workloads. The first component is the “controller” - this will handle IP assignments and generally interface with Kubernetes. The second is the “speaker” - this will advertise our LB IPs from our Kubernetes nodes, allowing traffic to actually reach the right service. This is done via either BGP or ARP.
Today, we’re going to use ARP mode. Given both MetalLB and kube-router have BGP support, and they don’t (yet) integrate nicely with each other, we’re avoiding the BGP topic until both upstreams figure something out.
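In ARP mode, the speaker answers ARP requests for a service’s external IP with the MAC address of the node it has been assigned to, so clients on the LAN send their traffic to that node. Once we’ve allocated a LoadBalancer IP later in this post, you can watch this happening from any other host on the LAN - a sketch, where the interface name and IP are assumptions for my network:

$ sudo arping -I eth0 -c 3 172.31.1.0

The MAC address in the replies should belong to whichever Kubernetes node is currently announcing that IP.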
So - let’s deploy the MetalLB controller Deployment, speaker DaemonSet, and the related resources necessary:
$ kubectl apply -f https://gist.githubusercontent.com/kiall/7e3aae1bcd2de72f7e1e4b89cf16d5a9/raw/acab6c611cc192600c5be0afa06e4d3d0d297fc4/metallb.yaml
namespace "metallb-system" created
clusterrole "metallb-system:controller" created
clusterrole "metallb-system:speaker" created
role "leader-election" created
role "config-watcher" created
serviceaccount "controller" created
serviceaccount "speaker" created
clusterrolebinding "metallb-system:controller" created
clusterrolebinding "metallb-system:speaker" created
rolebinding "config-watcher" created
rolebinding "leader-election" created
deployment "controller" created
daemonset "speaker" created
We’ve deployed using a “fork” of the upstream MetalLB YAML, where the only addition is a tolerations section on both the Deployment and the DaemonSet, allowing them to tolerate being deployed onto a master node.
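For reference, the addition looks like this in each workload’s pod spec - it’s the same tolerations block we’ll use again for the test Deployment below:

tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
  operator: Exists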
Then we’ll wait a minute or so for container images to be pulled, and check on the status of the new pods:
$ kubectl -n metallb-system get pods
NAME                         READY     STATUS    RESTARTS   AGE
controller-8689f5cc4-8wj67   1/1       Running   0          3m
speaker-9zb2h                1/1       Running   0          2m
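As an aside, rather than re-running get pods until everything is up, kubectl can block until a rollout completes. A quick sketch, using the resource names from the apply output above:

$ kubectl -n metallb-system rollout status deployment/controller
$ kubectl -n metallb-system rollout status daemonset/speaker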
And - both the controller and speaker pods are ready. Good. Next, we’ll configure MetalLB by creating the necessary ConfigMap. We’ll save this into a file called metallb-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: lan-ip-space
      protocol: arp
      arp-network: 172.31.0.0/23
      cidr:
      - 172.31.1.0/24
This example uses my LAN network CIDRs - you will need to substitute your own. Here, arp-network: 172.31.0.0/23 is my LAN network, and 172.31.1.0/24 is the subset of it which I have reserved for MetalLB to allocate addresses from.
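As an illustration only (these values are hypothetical - adjust them to your own network), a home LAN using 192.168.1.0/24 that sets aside 192.168.1.240 through 192.168.1.255 for MetalLB would use a pool like:

address-pools:
- name: lan-ip-space
  protocol: arp
  arp-network: 192.168.1.0/24
  cidr:
  - 192.168.1.240/28

Whatever range you choose, make sure your DHCP server isn’t also handing out addresses from it.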
We’ll then apply this ConfigMap to Kubernetes:
$ kubectl apply -f metallb-config.yaml
configmap "config" created
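If you want to confirm the configuration was picked up, the controller’s logs are a good place to look (the exact messages will vary by MetalLB version):

$ kubectl -n metallb-system logs deployment/controller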
Testing the deployment
Just as in Part 2, we’re going to deploy a pod running a web server into Kubernetes; however, rather than a NodePort service, we’re going to use a LoadBalancer service. Let’s start by creating the Deployment YAML file - nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
We’ll then apply this Deployment to Kubernetes, and wait for the pods to become ready:
$ kubectl apply -f nginx-deployment.yaml
deployment "nginx" created

$ kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-768ccb4756-f9wvs   1/1       Running   0          37s       172.31.16.8   dl165g7
nginx-768ccb4756-nxzwj   1/1       Running   0          37s       172.31.16.7   dl165g7
nginx-768ccb4756-nz9tn   1/1       Running   0          37s       172.31.16.9   dl165g7
Here we can see 3 pods have been started, each with their own IP address in our pod network range. Next, let’s create the Service - we’ll need another YAML file for it, nginx-service-lb.yaml:
kind: Service
apiVersion: v1
metadata:
  name: nginx-service-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Again, we’ll apply this YAML, and wait for it to become ready:
$ kubectl apply -f nginx-service-lb.yaml
service "nginx-service-lb" created

$ kubectl get svc -o wide
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE       SELECTOR
kubernetes         ClusterIP      172.31.12.1     <none>        443/TCP        6d        <none>
nginx-service-lb   LoadBalancer   172.31.12.240   172.31.1.0    80:32349/TCP   5s        app=nginx
We can see our service has been allocated a cluster IP within our services network, that it’s been allocated a NodePort of 32349, and that it has an external IP of 172.31.1.0 allocated by MetalLB. If everything worked out like it should have, you should now have an HTTP server running at http://172.31.1.0/ on your LAN - let’s check it out:
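If you don’t have a browser handy, a quick check with curl from any machine on the LAN should return nginx’s default page:

$ curl -s http://172.31.1.0/ | grep '<title>'
<title>Welcome to nginx!</title>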
So - we can see the default webpage for the nginx container. This means our LoadBalancer implementation has been deployed successfully, and it’s time to clean up our test resources!
To clean up, we’ll run these two commands:
$ kubectl delete -f nginx-service-lb.yaml
service "nginx-service-lb" deleted

$ kubectl delete -f nginx-deployment.yaml
deployment "nginx" deleted
In part 4, we’ll deploy the Helm package manager to Kubernetes, and use it to deploy the Nginx ingress controller.