How To Create a Debian-Based Kubernetes Cluster

Lead Engineer @ Packetware

Set up networking and system requirements

You will need to do the following system setup on every node in your planned cluster. You can also add more nodes later.

Ensure you are logged in as the root user to set up the system.

Disabling swap

swapoff -a
nano /etc/fstab

The swap entry in /etc/fstab should be commented out, like so:

# /swapfile none swap sw 0 0
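
If you'd rather automate this than edit the file by hand, a sed one-liner can comment the entries out. This is a sketch that assumes the swap entries have "swap" as a whitespace-separated field; it writes a .bak backup so you can verify the change.

# Comment out any uncommented fstab line with a "swap" field
sed -i.bak '/^\s*#/!{/\sswap\s/s/^/#/}' /etc/fstab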

Bridged networking for containers

cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat <<EOF | tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system

1. Kernel Modules

  • overlay: This is a kernel module that provides the overlay filesystem, which allows for efficient management of container filesystems by layering them.
  • br_netfilter: This kernel module is used to enable network filtering on bridge interfaces, which is important for container networking (e.g., to enforce network policies).
  • modprobe overlay: This command loads the overlay module into the kernel immediately.
  • modprobe br_netfilter: This command loads the br_netfilter module into the kernel immediately. By loading these modules, the system is prepared to use the overlay filesystem and to apply network filtering on bridged networks.

2. Kernel Parameters

  • net.bridge.bridge-nf-call-iptables = 1: This setting tells the kernel to pass bridge traffic to iptables for filtering, which is necessary for enforcing network policies in Kubernetes.
  • net.ipv4.ip_forward = 1: This enables IP forwarding, allowing the system to forward packets between network interfaces. This is crucial for routing traffic between pods in a Kubernetes cluster.
  • net.bridge.bridge-nf-call-ip6tables = 1: This setting ensures that IPv6 bridge traffic is also passed to ip6tables for filtering.
  • sysctl --system: This command reloads all sysctl settings from the configuration files in /etc/sysctl.d/, applying the changes made in the 99-kubernetes-k8s.conf file.
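
To confirm the modules are loaded and the parameters took effect, you can read them back:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables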

Install Container Runtime Interface (CRI) w/ Containerd

Let's install it from the package manager.

apt update
apt install -y containerd

Now we will need to generate the configuration.

If installing the package didn't create the /etc/containerd directory, create it with mkdir before running the next command.

containerd config default > /etc/containerd/config.toml
nano /etc/containerd/config.toml

In the TOML section [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] we want to change SystemdCgroup to true, so that containerd uses the systemd cgroup driver that kubeadm configures the kubelet with. You can use find and replace:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
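
A quick grep confirms the change took:

grep SystemdCgroup /etc/containerd/config.toml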

Disable the AppArmor profile for runc for now, because it interferes with how Kubernetes manages containers through runc.

ln -s /etc/apparmor.d/runc /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/runc
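
You can check that the profile was unloaded with aa-status (shipped in Debian's apparmor package); runc should no longer be listed:

aa-status | grep runc || echo "runc profile not loaded"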

Enable containerd in systemd so it starts on subsequent boots, and restart it so our configuration changes apply.

systemctl enable containerd
systemctl restart containerd
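
Confirm the service came back up and picked up the new configuration:

systemctl is-active containerd
containerd config dump | grep SystemdCgroup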

Install Kubernetes Component Packages

Download the GPG key for the Kubernetes APT repository based on the version you want.

You may need to install curl and gpg with apt install curl gnupg if you don't already have those packages.

KUBERNETES_VERSION=1.31

mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

Update your package lists and install the components.

apt update
apt install -y kubelet kubeadm kubectl

We are going to put a hold on these packages to prevent them from being upgraded unintentionally when you run apt upgrade later.

apt-mark hold kubelet kubeadm kubectl
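
You can confirm the hold and the installed version like so:

apt-mark showhold
kubeadm version -o short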

Initialize The Control Plane and Create The Cluster

Now we can create the cluster. Pick your control plane node and run the following on it. Replace the endpoint with the public or private IP of your control plane node, depending on which address you want the nodes to use to communicate; the snippet below grabs the machine's public IP from ifconfig.me and pins the kubelet to it.

ENDPOINT=$(curl -s ifconfig.me)

cat > /etc/default/kubelet << EOF
KUBELET_EXTRA_ARGS=--node-ip=$ENDPOINT
EOF

kubeadm init --control-plane-endpoint=$ENDPOINT --apiserver-cert-extra-sans=$ENDPOINT --pod-network-cidr=10.0.0.0/8 --node-name control-plane-aws --ignore-preflight-errors Swap

When that finishes, we can make a copy of the admin kubeconfig so kubectl works as our user.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Let's check whether the cluster component health checks pass.

kubectl get --raw='/readyz?verbose'
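
The exact checks vary by version, but the output should end with a passed summary along these lines:

[+]ping ok
[+]etcd ok
...
readyz check passed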

Join Worker Nodes

You can copy your kubectl config to your local machine to interact with the cluster. Next, create a join command for the worker nodes:

kubeadm token create --print-join-command
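
It prints a command of this shape (the address, token, and hash below are placeholders):

kubeadm join 203.0.113.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>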

Run the printed join command on each of the worker nodes we prepped earlier in the system setup section. The new nodes may report NotReady until the CNI is applied in the next step. Check on the nodes in the cluster by running:

kubectl get nodes

Apply Container Network Interface (CNI) w/ Calico

The manifest-based Calico install deploys into the kube-system namespace. If the --pod-network-cidr you passed to kubeadm init differs from Calico's default pool (192.168.0.0/16), uncomment and set CALICO_IPV4POOL_CIDR in the manifest to match before applying it.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get pods -n kube-system
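
Once the calico-node and calico-kube-controllers pods are Running, the nodes should flip from NotReady to Ready:

kubectl get nodes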