- On all nodes where k0s is going to be installed:
- Create new partition
- Mount it as /var/lib/k0s
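A minimal sketch of these two steps, assuming a spare disk at /dev/sdb (the device name is an assumption; adjust to your hardware):
# Partition and format the spare disk (assumed here to be /dev/sdb)
sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1
# Mount it as /var/lib/k0s and persist the mount across reboots
sudo mkdir -p /var/lib/k0s
echo '/dev/sdb1 /var/lib/k0s ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /var/lib/k0s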
- Hostname naming convention (reason: easily traceable afterwards from the k8s cluster):
prefix-<role><n>-<ip>
- role - worker/controller
- n - node id - optional
- ip - static IP, dash-separated
# Example hostname for controller
sudo hostnamectl set-hostname moriawalls-debian-controller1-192-168-68-3
# Example hostname for worker
sudo hostnamectl set-hostname theshire-ubuntu-worker1-192-168-68-4
- Update /etc/hosts so that the 127.0.1.1 entry matches the new hostname
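One way to do it (a sketch; assumes the stock Debian/Ubuntu 127.0.1.1 line is already present in /etc/hosts):
# Point the 127.0.1.1 entry at the new hostname
sudo sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $(hostname)/" /etc/hosts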
sudo curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --enable-worker -c ./k0s.yaml
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
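The k0s.yaml passed with -c is not shown here; a minimal sketch that would match the openebs-hostpath StorageClass patched above (assuming OpenEBS is enabled via the k0s storage extension) looks roughly like:
# k0s.yaml (sketch) - enable the OpenEBS local storage extension
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  extensions:
    storage:
      type: openebs_local_storage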
- Generate Token for worker
sudo k0s token create --role=worker
- Add the token to a file called k0s-worker<n>-token-file, where n is the worker enumeration
- Deploy the worker on the target node
sudo k0s install worker --token-file /home/na/k0s-worker01-token-file
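To confirm the worker registered, check from the controller (it can take a minute for the node to show up):
# The k0s service must be running on both nodes; if it is not, start it with: sudo k0s start
sudo k0s kubectl get nodes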
- Install kubectl
- Add the GPG signing key, add the repo, install:
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install kubectl
- On the controller node:
cat /var/lib/k0s/pki/admin.conf
- Copy the content to the management host, into the user's home directory as .kube/config
- Modify cluster.server from localhost:6443 to <controller ip>:6443 (a sketch of both steps follows)
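A sketch of both steps run from the management host (assumes SSH access with passwordless sudo on the controller; 192.168.68.3 and the user na are taken from the examples above):
mkdir -p ~/.kube
ssh na@192.168.68.3 'sudo cat /var/lib/k0s/pki/admin.conf' > ~/.kube/config
chmod 600 ~/.kube/config
# Point cluster.server at the controller instead of localhost
sed -i 's#https://localhost:6443#https://192.168.68.3:6443#' ~/.kube/config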
- Install Helm
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /etc/apt/keyrings/helm.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
- Install kustomize
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
- The preferred way is to deploy using Helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace ingress-nginx --set controller.service.type=NodePort --set controller.service.nodePorts.http=32080 --set controller.service.nodePorts.https=32443
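To verify the controller came up before moving on:
kubectl get pods -n ingress-nginx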
- Option: deploy using kubectl
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/baremetal/deploy.yaml
To delete nginx ingress after deploying with kubectl
kubectl delete all --all -n ingress-nginx
Note:
Cleaning up related resources left floating around after deploying nginx ingress with kubectl is tedious, to say the least. This is why it is recommended to deploy nginx ingress using Helm, which does a much better job of cleaning up on removal.
Note: This is an important step! Otherwise NFS-based Persistent Volumes will not function.
An indication that this package is missing on your worker node/s: look for the following in the describe output of a Deployment containing a pod that requires an NFS mount:
Mounting command: mount
Mounting arguments: -t nfs XXXXXXXXXXX /var/lib/k0s/kubelet/pods/669bd77e-3874-414e-b014-b7e95e0833cb/volumes/kubernetes.io~nfs/nfs-pv/...
Output: mount: /var/lib/k0s/kubelet/pods/669bd77e-3874-414e-b014-b7e95e0833cb/volumes/kubernetes.io~nfs/nfs-pv: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
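The /sbin/mount.nfs helper referenced in that error is provided by the nfs-common package on Debian/Ubuntu, so install it on every worker node:
# Run on each worker node (Debian/Ubuntu)
sudo apt-get update
sudo apt-get install -y nfs-common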
helm repo add budibase https://budibase.github.io/budibase/
helm repo update
# budibase-k0s-values.yaml is a Helm values file that instructs Helm not to deploy ingress-nginx but to use the existing one
cd helm-post-render/kustomize
helm install --create-namespace -f ../../budibase-k0s-values.yaml --post-renderer ./kustomize --version 2.3.6 --namespace budibase budibase budibase/budibase
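For context, the ./kustomize post-renderer referenced above is typically a small executable wrapper living next to kustomization.yaml; a sketch following the "Patch Any Helm Chart Template Using A Kustomize Post-Renderer" article linked below (the exact script used here is not shown):
#!/bin/bash
# helm-post-render/kustomize/kustomize (sketch)
# Helm pipes the rendered manifests to stdin; save them as all.yaml
# (referenced from kustomization.yaml), let kustomize apply the patches, then clean up.
cat <&0 > all.yaml
kustomize build . && rm all.yaml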
> kubectl get ingress -n budibase
NAME CLASS HOSTS ADDRESS PORTS AGE
budibase-budibase <none> * 192.168.68.4 80 6h26m
> kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.100.84.85 <none> 80:30277/TCP,443:30020/TCP 6h38m
ingress-nginx-controller-admission ClusterIP 10.99.217.233 <none> 443/TCP 6h38m
- To pull the Helm chart archive:
helm pull budibase/budibase --destination ./dev/git/helm/ --untar
- Developing a Kustomize patch
- Pull the source:
cd ~/dev/git/helm/budibase4k0s
helm pull budibase/budibase --destination .
- To check the outcome of the patch:
# Comment out all lines in kustomization.yaml except the all.yaml line
helm template my-test ../budibase/ --post-renderer ./kustomize --debug --dry-run > out1.yaml
# Uncomment all lines in kustomization.yaml
helm template my-test ../budibase/ --post-renderer ./kustomize --debug --dry-run > out2.yaml
diff out1.yaml out2.yaml
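For orientation, the kustomization.yaml in that directory follows the usual post-renderer layout; a sketch (the patch target below is hypothetical - substitute whatever resource is actually being patched):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml                   # the Helm-rendered manifests written by the wrapper script
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: example-deployment   # hypothetical target
    path: patch.yaml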
- Show details of the installed chart (version etc.):
helm show chart budibase/budibase
- Find a Pod by its IP (relies on the jq utility being installed):
kubectl get --all-namespaces --output json pods | jq '.items[] | select(.status.podIP=="10.244.1.49")' | jq .metadata.name
- Restart a deployment (e.g. the nginx controller):
kubectl rollout restart deployment budibase-ingress-nginx-controller -n budibase
- nginx-controller error "port 80 is already in use. Please check the flag --http-port" when using the nginx ingress supplied with Budibase's Helm chart
- The interim solution I found was to update the nginx controller from 1.1.0 to 1.1.3
- Patch the deployment:
kubectl set image deployment/budibase-ingress-nginx-controller controller=k8s.gcr.io/ingress-nginx/controller:v1.1.3@sha256:31f47c1e202b39fadecf822a9b76370bd4baed199a005b3e7d4d1455f4fd3fe2 --record
- If need be, force a restart:
kubectl patch deployment budibase-ingress-nginx-controller -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
- In the end I dropped this solution in favor of a cluster-wide ingress-nginx installation using Helm
- Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" ...
- Check if any IngressClasses are floating around; if so, delete them:
kubectl get Ingressclass --all-namespaces
- Check if any nginx-related ClusterRoleBindings are floating around; if so, delete them:
kubectl get clusterrolebindings | grep nginx
- Check if any nginx-related ClusterRoles are floating around; if so, delete them:
kubectl get clusterroles | grep nginx
- Check if there are any nginx-related ValidatingWebhookConfigurations; if so, delete them:
# Search
kubectl get validatingwebhookconfigurations
# Delete
kubectl delete validatingwebhookconfigurations ingress-nginx-admission
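Deleting the other leftovers follows the same pattern; the names below are examples, use whatever the get commands above actually return:
kubectl delete ingressclass nginx
kubectl delete clusterrolebinding ingress-nginx ingress-nginx-admission
kubectl delete clusterrole ingress-nginx ingress-nginx-admission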
- Error: rejected since it cannot handle ["BIG_CREATION"]
- This is evident in logs within couchdb pod
- The pod contains couchdb container and sidecar container with clouseau which provides lucene search capabilities to couchdb 3.x
- The error is due to an incompatible Erlang protocol between the two.
- Solution: patch the Helm chart to downgrade CouchDB from v3.2.1 to v3.1.2 (a values sketch follows)
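A sketch of how the version pin could be expressed as a Helm values override, assuming the bundled CouchDB subchart exposes the standard image.tag value (verify against the chart's actual values before using):
# budibase-k0s-values.yaml (excerpt, sketch)
couchdb:
  image:
    tag: 3.1.2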
- NodePort vs HostPort
- Kubernetes ingress-nginx installation guide
- K8s ingress-nginx Helm chart values
- ingress-nginx bare-metal considerations
- Patch Any Helm Chart Template Using A Kustomize Post-Renderer
- Helm Kustomize example
- Kubernetes Kustomize with JsonPatches6902
- JavaScript Object Notation (JSON) Patch 6902
- NFS Persistence Storage Basics
- NFS based Persistent Volume in Kubernetes example
- JSON Patch (JsonPatches6902) Builder Online
- Convert YAML to JSON online
- Convert JSON to YAML online
- Modify containers without rebuilding their image
- Budibase - backups
- Create missing system DBs in couchdb
- Apache CouchDB - release notes
- My own Budibase app sidecar for admin