Kubernetes on Hetzner Cloud: Setup a Kubernetes Cluster
This blog post shows how to set up a Kubernetes cluster on Hetzner Cloud servers. The resulting cluster will not (yet) be complete, but it will already support persistent volumes backed by Hetzner Cloud Block Storage Volumes. A follow-up post will describe how to set up an (nginx) ingress and a load balancer (which Hetzner does not offer yet).
I didn’t figure this out solely on my own; this post builds on a few great sources:
- Christian Beneke: https://community.hetzner.com/tutorials/install-kubernetes-cluster
- blinkeye: https://blinkeye.github.io/post/public/2019-07-25-hetzner-k8s-with-private-network/
- Kubernetes Documentation (which is very good!): https://kubernetes.io/docs/home/
I’ll describe the steps using Fedora 31 as the operating system for the servers. Why Fedora? I’m not as happy with Ubuntu as I was a few years back, so why not give another major distro a try and learn its ups and downs?
What we’ll do
- Set up Kubernetes with a single master and 2 worker nodes, which will cost about 13,24€ a month (without storage). Storage will set you back an additional 4,76€ per 100 GB per month
- API communication will go through a private network, not over the public internet
- You’ll be able to provision volumes based on Hetzner Cloud Block Storage Volumes for data storage
What we won’t do (yet)
- Set up an ingress to serve websites on a public IPv4 address
- Set up a load balancer (required for the ingress)
- Set up a backup strategy for the data stored in the cluster and on the volumes
I’ll write about these topics in a few follow-up posts, so you might revisit my blog in a few days/weeks.
What you’ll need
- Basic Linux know-how, especially shell commands
- A Hetzner Cloud account
Step 1 - Install the hcloud CLI utility
When working with Hetzner Cloud, it is useful to do most of the work in the shell. This is where the hcloud CLI utility comes in handy.
If you’re on a Mac like me, all you’ll have to do is:
brew install hcloud
If you’re not on a Mac, you’ll find how to install it here: https://github.com/hetznercloud/cli
Step 2 - Create the resources on Hetzner Cloud
We need three servers for this setup, which we’ll create using the hcloud utility. We will create and set up:
- 1 Hetzner Cloud network
- 1 Hetzner Cloud CX11 server
- 2 Hetzner Cloud CX21 servers
The smaller CX11 server will act as the Kubernetes master and the 2 CX21 servers will be worker nodes.
2.1a - I don’t have an SSH key yet
If you haven’t done so already, add your SSH public key to Hetzner Cloud, so you can log in to your servers with your key instead of a password.
hcloud ssh-key create --name <whateverthenameyoulike> --public-key-from-file ~/.ssh/id_rsa.pub
and write down the ID of the created key, because we will need it later.
2.1b - I already have a Hetzner Cloud SSH key
If you already created an SSH key, look up its ID now, because we will need it shortly.
hcloud ssh-key list
2.2 - Create the network and nodes
Now create the resources on the Hetzner Cloud.
```shell
# Create the private network (you can of course alter the IP range; 10.98.0.0/16 is just a suggestion)
hcloud network create --name kubernetes --ip-range 10.98.0.0/16
hcloud network add-subnet kubernetes --network-zone eu-central --type server --ip-range 10.98.0.0/16

# Create the servers (this is where you'll need the SSH key ID from step 2.1)
hcloud server create --type cx11 --name master --image fedora-31 --ssh-key <ssh key id from 2.1>
hcloud server create --type cx21 --name worker-1 --image fedora-31 --ssh-key <ssh key id from 2.1>
hcloud server create --type cx21 --name worker-2 --image fedora-31 --ssh-key <ssh key id from 2.1>

# Attach the servers to the private network (we use an easy-to-remember IP for the master, but this is optional)
hcloud server attach-to-network --network kubernetes --ip 10.98.0.10 master
hcloud server attach-to-network --network kubernetes worker-1
hcloud server attach-to-network --network kubernetes worker-2
```
Now you can SSH into any of the servers using:

```shell
hcloud server ssh <name-of-server>

# Example
hcloud server ssh master
```
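Before moving on, it doesn’t hurt to double-check that everything was created as intended; the output should show the private IPs assigned above:

```shell
# List all servers with their status and IP addresses
hcloud server list

# Show the subnet and the servers attached to the private network
hcloud network describe kubernetes
```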
Step 3 - Prepare the systems for installation
Before we install Kubernetes we’ll have to prepare the systems. This applies only to Fedora 31. If you want to use Ubuntu I recommend reading the post from Christian Beneke.
Important! You’ll have to do these steps on every server (master and the 2 nodes).
Fedora comes without bash-completion enabled. You can easily fix this with
```shell
dnf install -y nano bash-completion
source /etc/profile.d/bash_completion.sh
```
3.1 Update the system
Before we begin, it is recommended to update the system:
dnf update -y
3.2 Enable Networking for Kubernetes
```shell
cat <<EOF > /etc/sysctl.d/k8s.conf
# Allow IP forwarding for Kubernetes
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
EOF
```
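One caveat (my assumption based on a stock Fedora install, not from the original post): the two `net.bridge.*` settings only exist once the `br_netfilter` kernel module is loaded, so it makes sense to load it now, persist it across reboots, and apply the sysctls without waiting for a reboot:

```shell
# Load br_netfilter now (the net.bridge.* sysctls depend on it)
modprobe br_netfilter
# Load it again automatically on every boot
echo br_netfilter > /etc/modules-load.d/k8s.conf
# Apply all settings from /etc/sysctl.d immediately
sysctl --system
```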
Create config files for Kubernetes and Docker, which we will install shortly.
```shell
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cloud-provider=external --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
EOF

mkdir -p /etc/systemd/system/docker.service.d/
cat <<EOF > /etc/systemd/system/docker.service.d/00-cgroup-systemd.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
EOF

systemctl daemon-reload
```
3.3 Use cgroups v1 instead of v2
If you’ve read the release notes of Fedora 31 carefully, you might have noticed the switch to cgroups v2. Since Docker and Kubernetes rely heavily on cgroups and don’t support cgroups v2 (yet), we would definitely run into problems.
So we need to switch back to v1:
```shell
dnf install -y grubby
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
```
Since it’s a kernel parameter, we need to reboot the system completely for it to take effect.
3.4 Install Docker
Follow https://docs.docker.com/install/linux/docker-ce/fedora/ to install Docker, which basically boils down to:
```shell
# Add the repo
dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo

# Install the packages
dnf install -y docker-ce docker-ce-cli containerd.io

# Enable and start the service
systemctl enable --now docker
```
3.5 Install Kubernetes
You can almost completely follow https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ to install Kubernetes, but with a few exceptions.
```shell
# Ensure the iptables tooling does not use the nftables backend
update-alternatives --set iptables /usr/sbin/iptables-legacy

# Add the repo (difference to the original is the "exclude=kube*" part, since we want to do Kubernetes updates manually)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install the packages for 1.17.3 (difference to the original is the addition of kubernetes-cni, since we'll need the CNI plugins for the pod network)
dnf install -y kubelet-1.17.3-0 kubeadm-1.17.3-0 kubectl-1.17.3-0 kubernetes-cni --disableexcludes=kubernetes

# Enable the kubelet
systemctl enable --now kubelet
```
3.6 Pin Versions
A plain dnf upgrade would also upgrade the Docker and Kubernetes packages. This might not be what we want, since every Kubernetes upgrade has specific steps to go through: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
So it makes sense to pin the installed packages to a fixed version and upgrade them manually. To prevent dnf from upgrading packages you can define excludes in the configuration, but there is a more convenient way: dnf-plugin-versionlock, a dnf plugin that maintains a list of packages which will be skipped on upgrades. Install it with:
dnf install -y dnf-plugin-versionlock
Then add all the installed Kubernetes and Docker packages to that lock list:
dnf versionlock add kube* docker-* containerd*
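You can confirm which packages actually ended up on the lock list with:

```shell
# Show all version-locked packages
dnf versionlock list
```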
When we need to upgrade, we can simply remove the version lock:
dnf versionlock delete <package>
and install the desired version
dnf install -y kubeadm-1.17.3-0
and add the package to the version lock again afterwards. Do this for every package, following the official upgrade guide: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
Step 4 - Initialize the cluster - Install the control plane
If you’ve followed all the steps so far on every server, we have everything we need to initialize the cluster. The following steps are done only on the master node.
You’ll need the public IP address of your master server and its address on the private network. Since the API server traffic between master and nodes is not completely secured (see https://kubernetes.io/docs/concepts/architecture/master-node-communication/), I want it to go over the private network between my servers. So we tell kubeadm to advertise the API server on the private IP (10.98.0.10, assigned in step 2.2), but we also add the public IP of the server to the API server certificate (--apiserver-cert-extra-sans), since I want to deploy from my Mac too (that traffic is encrypted via the certificate referenced in .kube/config). The flag --ignore-preflight-errors=NumCPU is only needed on a CX11 server, since it has just one CPU while kubeadm recommends at least 2 CPUs for a master.
```shell
kubeadm init \
    --apiserver-advertise-address=10.98.0.10 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16 \
    --kubernetes-version=v1.17.3 \
    --ignore-preflight-errors=NumCPU \
    --apiserver-cert-extra-sans <public ip of your master server>
```
Now you should see a success message like:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.98.0.10:6443 --token 9bsd1v.jl1351231kk53 \
    --discovery-token-ca-cert-hash sha256:aca30f96f58ea1479fe421936e232d5dc46fe19f03fe9d7888821154bb457039
```
Write down the join command at the end of the output, because we will need it later.
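Note that the bootstrap token in the join command is only valid for 24 hours by default. If it has expired by the time you want to join the workers, you can print a fresh join command on the master:

```shell
# Creates a new bootstrap token and prints the complete join command
kubeadm token create --print-join-command
```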
Copy the cluster config to your home directory in order to access the cluster with kubectl:
```shell
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
```
Step 5 - Deploy a container network
To enable the containers/pods to communicate across nodes we need a pod network add-on (CNI plugin). One option is Cilium, but you can choose another: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
Update Jan 20 2020: Flannel caused some problems on my cluster, which is why I use Cilium here.
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.7.1/install/kubernetes/quick-install.yaml
Now check whether the network is up and running. You can deploy a thorough connectivity test to see if every container on each node can reach the others under different network policies.
```shell
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.7.1/examples/kubernetes/connectivity-check/connectivity-check.yaml

# after a while
kubectl get pods
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-9b85dd869-292s2                                   1/1     Running   0          8m37s
echo-b-c7d9f4686-gdwcs                                   1/1     Running   0          8m37s
host-to-b-multi-node-clusterip-6d496f7cf9-956jb          1/1     Running   0          8m37s
host-to-b-multi-node-headless-bd589bbcf-jwbh2            1/1     Running   0          8m37s
pod-to-a-7cc4b6c5b8-9jfjb                                1/1     Running   0          8m36s
pod-to-a-allowed-cnp-6cc776bb4d-2cszk                    1/1     Running   0          8m36s
pod-to-a-external-1111-5c75bd66db-sxfck                  1/1     Running   0          8m35s
pod-to-a-l3-denied-cnp-7fdd9975dd-2pp96                  1/1     Running   0          8m36s
pod-to-b-intra-node-9d9d4d6f9-qccfs                      1/1     Running   0          8m35s
pod-to-b-multi-node-clusterip-5956c84b7c-hwzfg           1/1     Running   0          8m35s
pod-to-b-multi-node-headless-6698899447-xlhfw            1/1     Running   0          8m35s
pod-to-external-fqdn-allow-google-cnp-667649bbf6-v6rf8   1/1     Running   0          8m35s
```
When every pod reaches status Running, you’re in luck: everything works as expected. You can delete the check afterwards:
kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.7.1/examples/kubernetes/connectivity-check/connectivity-check.yaml
Step 6 - Join worker nodes
Invoke the join command printed in Step 4 on both worker nodes.
```shell
kubeadm join 10.98.0.10:6443 --token 9bsd1v.jl1351231kk53 \
    --discovery-token-ca-cert-hash sha256:aca30f96f58ea1479fe421936e232d5dc46fe19f03fe9d7888821154bb457039
```
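Once both workers have joined, you can verify the cluster state from the master (or from your Mac, with the copied kubeconfig):

```shell
# All three nodes should show up; the workers turn Ready once the CNI pods are running on them
kubectl get nodes -o wide
```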
Step 7 - Deploy the Hetzner Cloud Controller Manager
In order to integrate Kubernetes with Hetzner Cloud, Hetzner has developed a “Kubernetes Cloud Controller Manager”. It needs access to the Hetzner Cloud API, so we have to create another API token.
To create a Hetzner Cloud API token log in to the web interface, and navigate to your project -> Access -> API tokens and create a new token.
Now we create a secret in Kubernetes containing that token. For that you’ll also need the ID of the network we created in step 2.2:
hcloud network list
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: "<the api token you just created>"
  network: "<network-id>"
EOF
```
Now we can deploy the controller
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/v1.5.1.yaml
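To check that the controller actually came up, you can look for its pod in the kube-system namespace (the grep pattern is just a convenience; the exact pod name comes from the manifest above):

```shell
# The cloud controller manager should show up as a Running pod
kubectl get pods -n kube-system | grep hcloud
```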
Step 8 - Deploy Hetzner Cloud Storage (CSI)
Most applications need persistent volume claims (PVCs); to serve them, you need to deploy a CSI driver. Hetzner provides one for that as well.
Create or reuse the API token from the previous step and create a secret for the driver.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: hcloud-csi
  namespace: kube-system
stringData:
  token: "<the api token you just created>"
EOF
```
Now deploy the CSI CRDs and the driver:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csidriver.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csinodeinfo.yaml
kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.2.3/deploy/kubernetes/hcloud-csi.yml
```
Now you should be able to deploy applications with storage capability (volumes). But in order to access these applications from the internet, you would need an entry point such as a NodePort; there is a better solution for that, though: LoadBalancer/Ingress.
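To verify the CSI driver end to end, you can create a small test PVC. The driver installs a StorageClass named hcloud-volumes; the claim name csi-test is just an example, and 10Gi is the minimum size of a Hetzner Block Storage Volume:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hcloud-volumes
EOF

# The claim should become Bound and a 10 GB volume appears in the Hetzner console
# (depending on the driver version it may stay Pending until a pod actually uses it)
kubectl get pvc csi-test

# Clean up the test claim (this also deletes the Hetzner volume)
kubectl delete pvc csi-test
```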
The next topic will be how we can access the applications, deployed on Kubernetes.
kubernetes hetzner cloud server fedora
2019-12-22 15:47 +0100 (Last updated: 2019-03-08 15:47 +0100)