How to install Kubernetes with Microk8s and Rancher

Introduction

After having worked with Kubernetes for a client for around half a year, the natural next step was to learn how to set up a Kubernetes cluster myself. There are many different ways to set up a production-grade Kubernetes cluster, but I found that the easiest way to get started was to use the Microk8s tool that Canonical maintains. The main advantage of this Kubernetes distribution is that it is lightweight and easy to set up, especially if you are installing Kubernetes on one or more Ubuntu servers. Other alternatives that I considered were Rancher's RKE and K3s, both of which are viable.

In this tutorial, I will show you how to install Kubernetes on a single node and run Rancher on top of it. Afterwards, you can easily add more nodes to your Kubernetes cluster by following this guide; a rough sketch of what that looks like is shown below.
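
For orientation, here is a rough sketch of what adding a second node later looks like with Microk8s clustering; the exact join command and token are printed by the first step and will differ for your cluster:

# On the existing node: generate a one-time join token (prints a "microk8s join ..." command)
sudo microk8s add-node

# On the new node (with microk8s installed there as well): run the printed command, e.g.
# sudo microk8s join <node-ip>:25000/<token>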

Overview

  1. Install Kubernetes using Microk8s
  2. Install Rancher
  3. Patch Kubernetes and Rancher

Installation of Kubernetes

First, start with a fresh installation of Ubuntu Server. I am using version 20.04 LTS.

Thereafter, install microk8s using snap. We will use Kubernetes 1.18, which I know works with the versions of Rancher and cert-manager used later in this tutorial.

sudo snap install microk8s --classic --channel=1.18/stable

Open the firewall so that Kubernetes pods can communicate with each other and with the internet:

sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
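
As a sanity check, you can print the active firewall rules and defaults to confirm that routed traffic is now allowed; the exact output depends on your existing rules:

sudo ufw status verbose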

Now that microk8s is installed, we have access to the microk8s commands through the shell. Before we can use them, we need to give our user profile permission to access the command. Give yourself permissions:

sudo usermod -a -G microk8s [your username]
sudo chown -f -R [your username] ~/.kube
newgrp microk8s

In order to check that microk8s is installed and running, go ahead and run:

microk8s status

This should print out something along the lines of:

> microk8s is running
>
> addons:
>   enabled:
>   disabled:
>     fluentd:         # Elasticsearch-Fluentd-Kibana logging and monitoring
>     gpu:             # Automatic enablement of Nvidia CUDA
>     helm:            # Leverage Helm charts to manage your Kubernetes apps
>     ingress:         # Ingress controller for external access.
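
If you are scripting the installation, microk8s can also block until the cluster is ready instead of you polling the status manually:

microk8s status --wait-ready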

Now enable the Kubernetes addons we need:

sudo microk8s enable dns storage helm3 ingress

Wait for all of the pods to come up.

# Note: If the ingress addon does not come up properly, reboot the server.
watch -n 1 microk8s kubectl get all --all-namespaces

Allow containers to run in privileged mode:

sudo sh -c 'echo "--allow-privileged=true" >> /var/snap/microk8s/current/args/kube-apiserver'
sudo systemctl restart snap.microk8s.daemon-apiserver.service
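
To verify that the flag was appended correctly, you can inspect the argument file again:

grep allow-privileged /var/snap/microk8s/current/args/kube-apiserver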

Installation of Rancher

The Rancher installation consists of two parts:

  1. Cert-manager - creates and manages the certificates that each Rancher pod uses.
  2. Rancher itself

Install cert-manager for Rancher

microk8s helm3 repo add jetstack https://charts.jetstack.io

microk8s kubectl create namespace cert-manager

microk8s helm3 install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.5.0 \
  --set installCRDs=true

Wait until all pods are deployed.

watch -n 1 microk8s kubectl get all --all-namespaces

Install Rancher

microk8s helm3 repo add rancher-stable https://releases.rancher.com/server-charts/stable

microk8s helm3 repo update

microk8s kubectl create namespace cattle-system
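
The install command below uses the shell's ${HOSTNAME} variable as the Rancher hostname. If Rancher should be reachable under a different DNS name, you can override the variable first; the name below is only a placeholder:

# Placeholder - replace with the DNS name that points at this server
export HOSTNAME=rancher.example.org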

microk8s helm3 install rancher rancher-stable/rancher \
   --namespace cattle-system \
   --set hostname=${HOSTNAME} \
   --set replicas=3 \
   --version 2.5.9

After 1-3 minutes, Rancher should be deployed. It can be accessed through the browser by navigating to the hostname you set, or to localhost:80 or localhost:443.
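
As a quick sanity check that the ingress is serving traffic, you can request the response headers from the node itself; the -k flag skips certificate verification, since the certificate is issued for your hostname rather than localhost:

curl -kI https://localhost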

Patch Rancher and Kubernetes

When installing Rancher using Helm, some of the CRDs are not properly managed by Helm. We therefore need to patch these resources, but we have to wait for them to be created before we can patch them.

Therefore, wait about 5 minutes after running the "helm3 install rancher" command before running the following:

{
   microk8s kubectl patch crd clusters.rancher.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd clusters.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"rancher-operator-crd"}}}'
   microk8s kubectl patch crd clusters.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"rancher-operator-system"}}}'

   microk8s kubectl patch crd projects.rancher.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd projects.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"rancher-operator-crd"}}}'
   microk8s kubectl patch crd projects.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"rancher-operator-system"}}}'

   microk8s kubectl patch crd roletemplates.rancher.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd roletemplates.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"rancher-operator-crd"}}}'
   microk8s kubectl patch crd roletemplates.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"rancher-operator-system"}}}'

   microk8s kubectl patch crd roletemplatebindings.rancher.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd roletemplatebindings.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"rancher-operator-crd"}}}'
   microk8s kubectl patch crd roletemplatebindings.rancher.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"rancher-operator-system"}}}'

   microk8s kubectl patch crd bundles.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd bundles.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd bundles.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd bundledeployments.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd bundledeployments.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd bundledeployments.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd bundlenamespacemappings.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd bundlenamespacemappings.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd bundlenamespacemappings.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd clustergroups.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd clustergroups.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd clustergroups.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd clusters.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd clusters.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd clusters.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd clusterregistrationtokens.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd clusterregistrationtokens.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd clusterregistrationtokens.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd gitrepos.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd gitrepos.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd gitrepos.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd clusterregistrations.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd clusterregistrations.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd clusterregistrations.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd gitreporestrictions.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd gitreporestrictions.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd gitreporestrictions.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'

   microk8s kubectl patch crd contents.fleet.cattle.io -p '{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
   microk8s kubectl patch crd contents.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-name":"fleet-crd"}}}'
   microk8s kubectl patch crd contents.fleet.cattle.io -p '{"metadata":{"annotations":{"meta.helm.sh/release-namespace":"fleet-system"}}}'
}
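
To spot-check that the patches took effect, you can inspect the labels on one of the patched CRDs; the Helm management label should now be present:

microk8s kubectl get crd clusters.rancher.cattle.io --show-labels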

Finally, depending on your network, you might need to patch the DNS configuration. Some Linux setups are managed in such a way that they cannot use the public Google DNS servers, which are the default for the DNS addon provided in Microk8s.

Read the netplan configuration file (the exact filename may differ on your system):

cat /etc/netplan/99-netcfg-vmware.yaml

Look at the address(es) of the nameserver(s):

>  nameservers:
>    search:
>      - my.domain.org
>    addresses:
>      - XX.XX.XX.XX #Address of nameserver
>      - YY.YY.YY.YY #Potentially other address of nameserver
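
If that netplan file does not exist on your machine, the nameservers the host currently uses can usually also be found via systemd-resolved on Ubuntu 20.04:

resolvectl status | grep 'DNS Servers'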

Create a file named coredns.yaml with the following content:

apiVersion: v1
data:
  Corefile: |-
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        log . {
          class error
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . XX.XX.XX.XX YY.YY.YY.YY #<-- Insert the nameserver addresses here
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system

Apply the yaml file to Kubernetes:

sudo microk8s kubectl apply -f coredns.yaml

Redeploy the coredns deployment:

microk8s kubectl rollout restart deployment/coredns -n kube-system
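
To verify that pods can resolve names through the patched CoreDNS configuration, you can run a throwaway pod (the name dns-test is arbitrary) and perform a lookup from inside the cluster:

microk8s kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default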

And that's it, you're done! Over the next hour or so, Rancher should finish setting everything up. In the meantime, you can test out deployments through the Rancher UI.

(Optional: Fix throttling issue)

Sometimes, we see that the permissions on the local Kubernetes cache are too restrictive. This causes client-side throttling of requests to the kube-apiserver and makes commands unbearably slow. An easy fix is to grant all users read and write access to the cache:

sudo chmod 777 -R ~/.kube/cache
# For better security, consider a more restrictive approach than chmod 777 (see below).
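
A tighter alternative, assuming the cache only needs to be read and written by your own user, is to take ownership of the cache directory instead of opening it up to everyone:

sudo chown -R $USER ~/.kube/cache
chmod -R u+rwX,go-rwx ~/.kube/cache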

(Uninstall Kubernetes)

If one wishes to remove the whole installation, this can most easily be done with:

sudo snap remove --purge microk8s