I got bored, so I installed Arch Linux, in a VM. So now what? We install Kubernetes, of course.
1. Do stuff
First, you install stuff:
sudo pacman -S kubeadm kubelet containerd kubectl
Then you start stuff:
sudo systemctl enable --now containerd kubelet
Fix the sysctls:
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
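These settings don’t survive a reboot. If you want them persistent, you can drop them into a sysctl.d file (the filename below is just my choice):
# /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1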
Load br_netfilter if it isn’t loaded already:
sudo modprobe br_netfilter
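Same persistence caveat: to get the module loaded on every boot, you can add a modules-load.d entry (again, the filename is just an example):
# /etc/modules-load.d/br_netfilter.conf
br_netfilter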
Because I wanted a dual-stack cluster, kubeadm needs a bit of extra configuration:
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,2001:db8:42:0::/56
  serviceSubnet: 10.96.0.0/16,2001:db8:42:1::/112
(I created a unique IPv6 prefix though, using this tool)
Then you just need to run kubeadm init:
sudo kubeadm init --config=kubeadm-config.yaml
If you’re lucky, it works. If you aren’t, just retry until it works.
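If init fails partway through, it can leave half-configured state behind; in that case kubeadm reset cleans things up before the next attempt:
sudo kubeadm reset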
Run this so you can use kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Because this is a single-node cluster, we need to allow pods to be scheduled on the control-plane node (the only node we have) by removing its taint:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
2. Figure out why nothing works
So for some reason the Kubernetes control plane and etcd were constantly going up and down. Adding systemd.unified_cgroup_hierarchy=0
to the kernel cmdline fixed it (as far as I can tell, that flag makes systemd fall back to the legacy cgroup v1 hierarchy, so the flapping was most likely a cgroup v2 driver mismatch).
Sidenote: maybe you just need to follow the instructions and configure the systemd cgroup driver in containerd: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd. I haven’t tried it yet because I don’t want to break my working cluster.
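For reference, the change that page describes is setting the systemd cgroup driver in containerd’s config (untested on my cluster, so treat this as a sketch of what the docs say rather than something I’ve verified):
# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
Followed by a sudo systemctl restart containerd.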
3. Networking
Usually you’d install a pod network add-on like Calico or Flannel, but they are overkill for a single-node cluster. With a bit of trial and error I came up with the following CNI configuration:
$ cat /etc/cni/net.d/10-kubernetes.conflist
{
  "cniVersion": "0.3.1",
  "name": "kubernetes",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "kubebr0",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{ "subnet": "10.244.0.0/24" }],
          [{ "subnet": "2001:db8:42:0::/64" }]
        ],
        "routes": [{ "dst": "10.96.0.0/16" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "externalSetMarkChain": "KUBE-MARK-MASQ"
    }
  ]
}
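A quick way to sanity-check that pods actually get both an IPv4 and an IPv6 address (pod name and image are just examples):
kubectl run dualstack-test --image=busybox --restart=Never -- sleep 3600
kubectl get pod dualstack-test -o jsonpath='{.status.podIPs}'
kubectl delete pod dualstack-test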
4. Storage
The easiest option is to just use OpenEBS’s Local PV Hostpath provisioner. Install the OpenEBS stuff:
kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/openebs-lite-sc.yaml
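You can check that the provisioner came up before moving on (this assumes the manifests install into their default openebs namespace):
kubectl get pods -n openebs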
Create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Set it as default:
kubectl patch storageclass local-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
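To verify the whole chain works, a throwaway PVC will do; the names here are made up for the test:
# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-hostpath-pvc
spec:
  storageClassName: local-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Because of WaitForFirstConsumer, the PVC will stay Pending until some pod actually mounts it.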
Sidenote: I want to move to the OpenEBS LVM provisioner in the future. When I get around to doing that, I’ll extend this section.
5. Create kubeconfig for another user
What if you don’t want to SSH into your server when you wanna manage your cluster? We can generate certificates for another user.
# user.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Will be used as the target "cluster" in the kubeconfig
clusterName: "mycluster"
# Will be used as the "server" (IP or DNS name) of this cluster in the kubeconfig
controlPlaneEndpoint: "mycluster.example.com:6443"
# The cluster CA key and certificate will be loaded from this local directory
certificatesDir: "/etc/kubernetes/pki"
sudo kubeadm kubeconfig user --client-name=myuser --config=user.yaml > my_new_cute_kubeconfig.yaml
Because kubeadm enables RBAC, we need to give roles to the new user:
kubectl create clusterrolebinding myuser-admin-binding --clusterrole=cluster-admin --user=myuser
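At this point the new kubeconfig should work end to end; a quick check from whatever machine you copied it to (the filename matches the one generated above):
kubectl --kubeconfig=my_new_cute_kubeconfig.yaml get nodes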
I guess that should be all you need to get a basic cluster running. In the future I might write about what I’m running on the cluster (and my Terraform setup for that).