
Kubernetes setup


The complete Kubernetes documentation can be found at https://kubernetes.io/docs/.

How to set up a Kubernetes cluster on multiple machines on Ubuntu 16.04

Requirements

  • A container runtime (like Docker) installed on each machine. If you don't have it, follow this guide to install it.
  • If you want to see the list of available Docker images pre-built for the CPSwarm project, you need a (free) DockerHub account (if you don't have one, visit https://hub.docker.com/signup to create it), and you need to send your Docker ID to davide.conzon@linksfoundation.com so it can be added to the CPSwarm organisation.

Master installation

Working on the machine where you want to install the Master of the cluster:

As root (after running sudo su):

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update

Currently the dashboard is not supported on Kubernetes 1.16. If you want to use the dashboard, install the previous Kubernetes version with these instructions:

apt-mark unhold kubelet && apt-get update && apt-get install -y kubelet=1.15.5-00 && apt-mark hold kubelet
apt-mark unhold kubectl && apt-get install -y kubectl=1.15.5-00 && apt-mark hold kubectl
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.15.5-00 && apt-mark hold kubeadm

Instead, to install the latest version, run:

apt-get install -y --allow-change-held-packages kubelet kubeadm kubectl 
apt-mark hold kubelet kubeadm kubectl

Then, to work as a regular user (run exit to leave root mode), run:

sudo swapoff -a 

(Kubernetes requires swap to be turned off to work properly. You can either run this command every time you start the node, or make the change permanent by editing /etc/fstab and removing or commenting out the swap entry.)
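For example (a sketch, the UUID is just a placeholder), the commented-out swap entry in /etc/fstab would look like:

# UUID=1234abcd-12ab-34cd-56ef-1234567890ab none swap sw 0 0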

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<YOUR_IP_ADDRESS>

Add --ignore-preflight-errors=NumCPU if kubeadm reports the error "the number of available CPUs 1 is less than the required 2".
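For example, on a single-CPU machine the full command could look like this (the IP address is a placeholder for your machine's address):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.10 --ignore-preflight-errors=NumCPU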

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now the Kubernetes master is successfully installed. To make other machines join the cluster, run the following command to generate the join command (starting with sudo kubeadm join ...), which needs to be executed on each node machine:

sudo kubeadm token create --print-join-command
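The generated command looks similar to the following sketch (the IP address, token, and hash here are placeholders, not real values):

sudo kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>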

Then run the following instructions to set up the pod-to-pod network (using Flannel, one of the available options):

sudo sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Alternatively, to use Weave Net as the pod network, for kubeadm versions earlier than 1.16 run:

kubectl apply --filename https://git.io/weave-kube-1.6

From kubeadm version 1.16 onward, run:

kubectl apply --filename https://raw.githubusercontent.com/weaveworks/weave/master/prog/weave-kube/weave-daemonset-k8s-1.9.yaml

If you want to allow pods to be scheduled on the master node as well, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

You can later restore the taint using:

kubectl taint nodes <master-node> node-role.kubernetes.io/master=:NoSchedule

Check whether the node is ready, and list all the nodes in the cluster, using:

kubectl get nodes
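The output should look similar to this sketch (node name, age, and version will differ on your cluster):

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   10m   v1.15.5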

If the node is NotReady, inspect its details with:

kubectl describe node <node-name>

and check whether there is a taint on the node. In case of DiskPressure, you can try to free disk space using:

journalctl --vacuum-time=10d

where 10d is the number of days of logs that you want to keep.
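For reference, a node under disk pressure typically shows a taint like the following in the kubectl describe node output:

Taints: node.kubernetes.io/disk-pressure:NoSchedule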

To get all the resources installed in the cluster, including Pods, Services, Deployments, and ReplicaSets, use:

kubectl get all
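On a fresh cluster the output is similar to this sketch (only the default kubernetes Service is shown; your cluster will list more resources):

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   10m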

Then, to delete one or more of them, use:

kubectl delete <TYPE>/<NAME>
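For example, to delete a hypothetical pod named my-pod as listed by kubectl get all:

kubectl delete pod/my-pod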

Dashboard

To install the dashboard for access from localhost only:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

To install the dashboard with external access (from this guide):

Create a self-signed certificate:

mkdir $HOME/certs
cd $HOME/certs
openssl genrsa -out dashboard.key 2048
openssl rsa -in dashboard.key -out dashboard.key
openssl req -sha256 -new -key dashboard.key -out dashboard.csr -subj '/CN=localhost'
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

Replace localhost with the hostname that will be used to reach the dashboard.
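For instance, if the dashboard will be reached at a hypothetical hostname dashboard.example.com, the CSR generation step becomes:

openssl req -sha256 -new -key dashboard.key -out dashboard.csr -subj '/CN=dashboard.example.com'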

Next, load the certificates into a secret:

kubectl -n kube-system create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs

If the secret already exists, update it instead:

kubectl -n kube-system create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs --dry-run -o yaml | kubectl apply -f -

Then deploy the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Finally, we can expose the dashboard service on a NodePort. This will allow it to be publicly accessible via a port forwarded on the Kubernetes hosts.

Edit the kubernetes-dashboard service and change the following options:

  • spec.type from ClusterIP to NodePort
  • Add spec.ports[0].nodePort set to 32641, or whatever port you want it exposed on

Run this command and modify the service with vim (press i to edit, then ESC and type :wq to save and exit):

kubectl -n kube-system edit service kubernetes-dashboard
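After the edit, the relevant part of the service spec should look roughly like this sketch (port 32641 follows the example above; port and targetPort come from the dashboard manifest):

spec:
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 32641
  type: NodePort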

When you save and close the text file, find out which port was allocated:

# kubectl -n kube-system get services
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   ...              <none>        53/UDP,53/TCP   28d
kubernetes-dashboard   NodePort    ...              <none>        443:32641/TCP   27m

Then the dashboard becomes available by browsing to https://<ip>:<port>.

Follow this guide to create the user needed to access the dashboard.

Create Admin Service Account

Copy the snippet below to dashboard-adminuser.yml and run the kubectl apply command:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

$ kubectl apply -f dashboard-adminuser.yml
serviceaccount/admin-user created

Create ClusterRoleBinding

In most cases, the cluster-admin role already exists in the cluster. We can use it and create only the ClusterRoleBinding. Copy the code below to the admin-role-binding.yml file and run the kubectl apply command:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

$ kubectl apply -f admin-role-binding.yml
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

To access the dashboard, you need a token, which can be obtained with:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
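The command prints the secret details; the long string in the token field is what you paste into the dashboard login page. The relevant part of the output looks roughly like this (the token shown is a placeholder):

Name:   admin-user-token-abcde
Type:   kubernetes.io/service-account-token
...
token:  eyJhbGciOiJSUzI1NiIs...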

Nodes installation

To add a node to the cluster, follow this guide.