
vm broker vanilla k8s cluster on multiple instances, same node


Create a PRIVILEGED k8s-master and four k8s-slave[0-3] instances using the UI, all on the same node.


Running all instances on the same node is critical to establish a local network between the k8s nodes. This remains necessary until an OVN solution is implemented.

Old issue: use the non-SSL minio.training domain, since there is an issue auto-loading certificates when creating a TLS-enabled cluster. The lab.min.dev domain forces usage of TLS. See the error:

ubuntu@k8s-master:~$ kubectl -n minio-operator logs pod/console-66664b78c7-v4dps
E: 2023/11/03 20:17:09 Unable to load certs: unable to create certs CA directory at /tmp/certs/CAs: failed with mkdir /tmp/certs/CAs: read-only file system
Serving operator at http://[::]:9090

NOTE: This is fixed by patching the console deployment to mount a writable emptyDir at /tmp/certs/CAs:

kubectl patch deployment -n minio-operator console -p '{"spec":{"template":{"spec":{"volumes":[{"name": "cas", "emptyDir": {}}]}}}}'
kubectl patch deployment -n minio-operator console -p '{"spec":{"template":{"spec":{"containers":[{"name": "console", "image": "minio/operator:v5.0.10", "volumeMounts":[{"name": "cas", "mountPath": "/tmp/certs/CAs"}]}]}}}}'

On both k8s-master and the four k8s-slave[0-3] instances, perform the following:

Persist sessions through reboots and closed terminal sessions

loginctl enable-linger ubuntu 
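
To confirm lingering took effect (a quick check; expect Linger=yes):

loginctl show-user ubuntu --property=Linger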

Install Linux kernel modules for k8s

sudo apt-get update -y && \
sudo apt-get upgrade -y && \
sudo apt-get install linux-generic -y && \
sudo apt-get dist-upgrade -y && \
sudo apt-get install linux-headers-generic -y && \
sudo touch /dev/kmsg
# Work around kernel version mismatches by linking the expected module
# directories to the installed 6.2.0-39 modules
sudo ln -s /lib/modules/6.2.0-39-generic /lib/modules/6.2.0-35-generic
sudo ln -s /lib/modules/6.2.0-39-generic /lib/modules/6.5.0-25-generic
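
The kernel versions in these links depend on what is installed; check the running kernel and the available module directories first:

uname -r
ls /lib/modules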

Validate architecture before starting

See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Show the architecture. This should output a value such as:

dpkg --print-architecture
#amd64

Get the IP address. This should output values such as:

ip a | grep inet
  inet 127.0.0.1/8 scope host lo
  inet6 ::1/128 scope host 
  inet 10.76.176.209/24 brd 10.76.176.255 scope global dynamic eth0
  inet6 fd42:e60f:1e70:7eb1:216:3eff:fe49:fa35/64 scope global mngtmpaddr 
  inet6 fe80::216:3eff:fe49:fa35/64 scope link 

Show the machine ID. This should output a value such as:

cat /etc/machine-id 
1a25f9f618d542b79de3556f3ba53696
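
The kubeadm install guide linked above also requires a unique MAC address and product_uuid on every node; verify with:

ip link
sudo cat /sys/class/dmi/id/product_uuid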

Install a CRI

https://kubernetes.io/docs/setup/production-environment/container-runtimes/

Install and configure prerequisites

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay && sudo modprobe br_netfilter && sudo modprobe nf_conntrack

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system && lsmod | grep br_netfilter && lsmod | grep overlay && sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Install containerd and run it as a systemd service

See https://github.com/containerd/containerd/blob/main/docs/getting-started.md

wget https://github.com/containerd/containerd/releases/download/v1.7.8/containerd-1.7.8-linux-amd64.tar.gz && sudo tar Cxzvf /usr/local containerd-1.7.8-linux-amd64.tar.gz && wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service && sudo mkdir -p /usr/local/lib/systemd/system && sudo mv containerd.service /usr/local/lib/systemd/system/containerd.service
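
Per the containerd getting-started guide, reload systemd and enable the service so containerd starts on boot:

sudo systemctl daemon-reload && sudo systemctl enable --now containerd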

Install runc

sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 && sudo install -m 755 runc.amd64 /usr/local/sbin/runc

Install CNI plugins

sudo wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz && sudo mkdir -p /opt/cni/bin && sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz 

Generate the config file

sudo mkdir -p /etc/containerd && sudo touch /etc/containerd/config.toml && sudo chmod 666 /etc/containerd/config.toml && sudo containerd config default > /etc/containerd/config.toml

Configure the systemd cgroup driver

sudo vi /etc/containerd/config.toml

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Also set the sandbox image so containerd's pause image matches the one kubeadm expects. This may not be needed; watch for an error in the following step, which would indicate this setting is required.

[plugins."io.containerd.grpc.v1.cri"]
  ...
  sandbox_image = "registry.k8s.io/pause:3.9"
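
As a non-interactive alternative to vi, a sed sketch (assuming the default generated config, which sets SystemdCgroup = false and an older pause image):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml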

Restart containerd

sudo systemctl restart containerd && sudo systemctl status containerd

Install kubelet, kubeadm, and kubectl

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg && curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg && echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl && sudo apt-mark hold kubelet kubeadm kubectl
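
Confirm the installed versions before proceeding:

kubeadm version && kubelet --version && kubectl version --client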

On only the k8s-master (control plane), perform the following

Create k8s cluster

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

If there is an issue, reset and run init again:

sudo kubeadm reset

Output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.62.75.75:6443 --token cuulzw.ql3r8a7hl5yc6m0r --discovery-token-ca-cert-hash sha256:38ecb9f1bbab3e1bde3492efc8f6396c349bc59d2029d55ec99a8895d888cd42 

Then deploy the flannel pod network:

kubectl apply -f https://github.com/flannel-io/flannel/releases/download/v0.23.0/kube-flannel.yml
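
To confirm the pod network comes up (flannel v0.23 installs into the kube-flannel namespace):

kubectl get pods -n kube-flannel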

Edit the kube-proxy configmap

This avoids the error: "Error running ProxyServer" err="open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"

kubectl edit configmap/kube-proxy -n kube-system

Under the conntrack section, change maxPerCore: null to maxPerCore: 0
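
After saving the edit, restart kube-proxy so the change takes effect:

kubectl -n kube-system rollout restart daemonset kube-proxy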

On only the k8s-slave[0-3] instances, perform the following

Join the k8s cluster

Run the following:

kubeadm join 10.62.75.75:6443 --token cuulzw.ql3r8a7hl5yc6m0r --discovery-token-ca-cert-hash sha256:38ecb9f1bbab3e1bde3492efc8f6396c349bc59d2029d55ec99a8895d888cd42 

You may need to add the missing file /run/flannel/subnet.env with the following contents:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
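
A sketch for creating that file with the values above:

sudo mkdir -p /run/flannel && cat <<EOF | sudo tee /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF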

Validate

Wait for the nodes to join, then check their status on k8s-master:

kubectl get nodes -o wide

Install the MinIO Operator

Once all nodes are ready, and still on k8s-master, run:

(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
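
To persist the PATH change across shells (optional):

echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc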

kubectl krew update
kubectl krew install minio

kubectl minio version

If the krew-installed plugin is not the desired version, install the v5.0.10 release binary directly:

wget https://github.com/minio/operator/releases/download/v5.0.10/kubectl-minio_5.0.10_linux_amd64
sudo mv kubectl-minio_5.0.10_linux_amd64 /usr/local/bin/kubectl-minio
sudo chmod +x /usr/local/bin/kubectl-minio

kubectl minio version

kubectl minio init

Expose the operator console on NodePort 31000:

kubectl patch service -n minio-operator console -p '{"spec":{"ports":[{"name": "http","port": 9090,"protocol": "TCP","nodePort":31000}],"type": "NodePort"}}'
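
Verify the console service is now exposed on nodePort 31000:

kubectl -n minio-operator get service console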

Access the operator console at http://k8s-master.minio.training:31000

Log in with the JWT from the console service account secret:

kubectl -n minio-operator get secret console-sa-secret -o jsonpath="{.data.token}" | base64 --decode

For the tenant, install a storage class, e.g.:

Install a local path provisioner

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
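
Confirm the provisioner is running:

kubectl -n local-path-storage get pods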

Apply a storage class

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
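
Confirm the standard storage class exists:

kubectl get storageclass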

Create a tenant using the operator console UI
