This is a k3s kubernetes cluster playground wrapped in a Vagrant environment.
Configure the host machine hosts file with:
10.11.0.4 registry.example.test
10.11.0.30 s.example.test
10.11.0.50 traefik.example.test
10.11.0.50 kubernetes-dashboard.example.test
10.11.0.50 kubernetes-hello.example.test
10.11.0.50 argocd.example.test
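For example, you can append these entries to /etc/hosts (adjust this if you manage the hosts file some other way):
# NB review /etc/hosts afterwards to make sure there are no duplicate entries.
cat <<'EOF' | sudo tee -a /etc/hosts
10.11.0.4 registry.example.test
10.11.0.30 s.example.test
10.11.0.50 traefik.example.test
10.11.0.50 kubernetes-dashboard.example.test
10.11.0.50 kubernetes-hello.example.test
10.11.0.50 argocd.example.test
EOF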
Install the base Debian 12 (Bookworm) vagrant box.
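For example, assuming the box was built locally and exported as debian-12-amd64-libvirt.box (these names are illustrative; check the Vagrantfile for the exact box name this environment expects):
vagrant box add --name debian-12-amd64 debian-12-amd64-libvirt.box
vagrant box list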
Optionally, start the rgl/gitlab-vagrant environment at ../gitlab-vagrant. If you do, this environment will have the gitlab-runner helm chart installed in the k8s cluster.
Optionally, connect the environment to the physical network through the host br-lan bridge. The environment assumes that the host bridge was configured as:
sudo -i
# review the configuration in the files at /etc/netplan and replace them all
# with a single configuration file:
ls -laF /etc/netplan
upstream_interface=eth0
upstream_mac=$(ip link show $upstream_interface | perl -ne '/ether ([^ ]+)/ && print $1')
cat >/etc/netplan/00-config.yaml <<EOF
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0: {}
  bridges:
    br-lan:
      # inherit the MAC address from the enslaved eth0 interface.
      # NB this is required in machines that have intel AMT with shared IP
      #    address to prevent announcing multiple MAC addresses (AMT and OS
      #    eth0) for the same IP address.
      macaddress: $upstream_mac
      #link-local: []
      dhcp4: false
      addresses:
        - 192.168.1.11/24
      routes:
        - to: default
          via: 192.168.1.254
      nameservers:
        addresses:
          - 192.168.1.254
        search:
          - lan
      interfaces:
        - $upstream_interface
EOF
netplan apply
And open the Vagrantfile, then uncomment and edit the block that starts at bridge_name with your specific network details. Also ensure that the hosts file has the used IP addresses.
Launch the environment:
time vagrant up --no-destroy-on-error --no-tty --provider=libvirt
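You can check the machines state, and later tear everything down, with the usual Vagrant commands:
vagrant status
# when you are done with the environment:
#vagrant destroy -f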
NB When the NUMBER_OF_AGENT_NODES Vagrantfile variable value is above 0, the server nodes (e.g. s1) are tainted to prevent them from executing non-control-plane workloads. That kind of workload is executed in the agent nodes (e.g. a1).
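When agent nodes are enabled, you can verify the applied taints from the host (a plain kubectl query; the exact taint keys are set by this environment's provisioning scripts):
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'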
Show containerd information:
vagrant ssh s1
sudo -i
crictl version
crictl info
exit
exit
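You can also list the node's images, containers, and pod sandboxes with the standard crictl sub-commands:
vagrant ssh s1
sudo -i
crictl images
crictl ps
crictl pods
exit
exit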
Access the cluster from the host:
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl cluster-info
kubectl get nodes -o wide
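And list the workloads that are running in the cluster:
kubectl get pods --all-namespaces -o wide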
Execute an example workload:
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl apply -f example.yml
kubectl rollout status deployment/example
kubectl get ingresses,services,pods,deployments
example_ip="$(kubectl get ingress/example -o json | jq -r .status.loadBalancer.ingress[0].ip)"
example_fqdn="$(kubectl get ingress/example -o json | jq -r .spec.rules[0].host)"
example_url="http://$example_fqdn"
curl --resolve "$example_fqdn:80:$example_ip" "$example_url"
echo "$example_ip $example_fqdn" | sudo tee -a /etc/hosts
curl "$example_url"
xdg-open "$example_url"
kubectl delete -f example.yml
Execute an example WebAssembly (Wasm) workload:
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl apply -f example-spin.yml
kubectl rollout status deployment/example-spin
kubectl get ingresses,services,pods,deployments
example_spin_ip="$(kubectl get ingress/example-spin -o json | jq -r .status.loadBalancer.ingress[0].ip)"
example_spin_fqdn="$(kubectl get ingress/example-spin -o json | jq -r .spec.rules[0].host)"
example_spin_url="http://$example_spin_fqdn"
curl --resolve "$example_spin_fqdn:80:$example_spin_ip" "$example_spin_url"
echo "$example_spin_ip $example_spin_fqdn" | sudo tee -a /etc/hosts
curl "$example_spin_url"
xdg-open "$example_spin_url"
kubectl delete -f example-spin.yml
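The example-spin deployment is expected to select a Wasm runtime through a RuntimeClass (see example-spin.yml for the exact name it references); you can list the RuntimeClasses registered in the cluster with:
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl get runtimeclasses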
Execute the kubernetes-hello workload; it uses Role, RoleBinding, ConfigMap, Secret, ServiceAccount, and Service Account token volume projection (a JSON Web Token and an OpenID Connect (OIDC) ID Token), each targeting a different audience:
export KUBECONFIG=$PWD/tmp/admin.conf
wget -qO- https://github.com/rgl/kubernetes-hello/raw/master/resources.yml \
| perl -pe 's,(\s+host: kubernetes-hello)\..+,\1.example.test,' \
> tmp/kubernetes-hello.yml
kubectl apply -f tmp/kubernetes-hello.yml
kubectl rollout status daemonset/kubernetes-hello
kubectl get ingresses,services,pods,daemonset
kubernetes_hello_ip="$(kubectl get ingress/kubernetes-hello -o json | jq -r .status.loadBalancer.ingress[0].ip)"
kubernetes_hello_fqdn="$(kubectl get ingress/kubernetes-hello -o json | jq -r .spec.rules[0].host)"
kubernetes_hello_url="http://$kubernetes_hello_fqdn"
echo "kubernetes_hello_url: $kubernetes_hello_url"
curl --resolve "$kubernetes_hello_fqdn:80:$kubernetes_hello_ip" "$kubernetes_hello_url"
kubectl delete -f tmp/kubernetes-hello.yml
Access the example nginx service (managed by ArgoCD as the nginx ArgoCD Application):
nginx_ip="$(kubectl get service/nginx -o json | jq -r .status.loadBalancer.ingress[0].ip)"
nginx_url="http://$nginx_ip"
echo "nginx_url: $nginx_url"
curl "$nginx_url"
List this repository's dependencies (and which ones have newer versions):
GITHUB_COM_TOKEN='YOUR_GITHUB_PERSONAL_TOKEN' ./renovate.sh
Access the Traefik Dashboard at:
https://traefik.example.test/dashboard/
Access the Rancher Server at:
https://s.example.test:6443
NB This is a proxy to the k8s API server (which is running on port 6444).
NB You must use the client certificate that is inside the tmp/admin.conf, tmp/*.pem, or /etc/rancher/k3s/k3s.yaml (inside the s1 machine) files.
Access the rancher server using the client certificate with httpie:
http \
--verify tmp/default-ca-crt.pem \
--cert tmp/default-crt.pem \
--cert-key tmp/default-key.pem \
https://s.example.test:6443
Or with curl:
curl \
--cacert tmp/default-ca-crt.pem \
--cert tmp/default-crt.pem \
--key tmp/default-key.pem \
https://s.example.test:6443
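You can inspect those client and CA certificates with openssl:
openssl x509 -in tmp/default-ca-crt.pem -noout -subject -issuer -dates
openssl x509 -in tmp/default-crt.pem -noout -subject -issuer -dates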
Access the Kubernetes Dashboard at:
https://kubernetes-dashboard.example.test
Then select Token and use the contents of tmp/admin-token.txt as the token.
You can also launch the kubernetes API server proxy in the background:
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl proxy &
And access the Kubernetes Dashboard at:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
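When you no longer need it, stop the background proxy (assuming it is the only background job in the current shell):
jobs
kill %1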
The K9s console UI dashboard is also installed in the server node. You can access it by running:
vagrant ssh s1
sudo su -l
k9s
The Zot Registry is installed in the registry node and can be accessed at https://registry.example.test.
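For example, you can query its OCI registry HTTP API from the host:
# NB you might need to pass the registry CA certificate with --cacert
#    (or use --insecure for a quick test) if the host does not trust it.
curl --silent https://registry.example.test/v2/_catalog | jq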
Get the Argo CD admin user password:
echo "Argo CD admin password: $(cat tmp/argocd-admin-password.txt)"
Access the Argo CD web interface at https://argocd.example.test.
Show the configuration:
kubectl get -n argocd configmap/argocd-cmd-params-cm -o yaml
Set the AWS credentials secret:
# NB for testing purposes, you can copy these from the AWS Management Console.
cat >tmp/aws-credentials.txt <<'EOF'
[default]
aws_access_key_id = <AWS_ACCESS_KEY_ID>
aws_secret_access_key = <AWS_SECRET_ACCESS_KEY>
#aws_session_token = <AWS_SESSION_TOKEN>
EOF
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl delete secret/aws-credentials \
--namespace crossplane-system
kubectl create secret generic aws-credentials \
--namespace crossplane-system \
--from-file credentials=tmp/aws-credentials.txt
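Verify that the secret exists and that the Crossplane AWS provider packages are installed and healthy:
kubectl get secret/aws-credentials --namespace crossplane-system
kubectl get providers.pkg.crossplane.io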
Create an S3 bucket:
# see https://marketplace.upbound.io/providers/upbound/provider-aws-s3/v1.11.0/resources/s3.aws.upbound.io/Bucket/v1beta2
# NB Bucket is cluster scoped.
# see kubectl get crd buckets.s3.aws.upbound.io -o yaml
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl create -f - <<'EOF'
apiVersion: s3.aws.upbound.io/v1beta2
kind: Bucket
metadata:
  name: crossplane-hello-world
spec:
  forProvider:
    region: eu-west-1
    tags:
      owner: rgl
  providerConfigRef:
    name: default
EOF
List the created bucket:
kubectl get buckets
Describe the created bucket:
kubectl describe bucket/crossplane-hello-world
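Crossplane provisions the bucket asynchronously, so you can wait for it to report the Ready condition before using it:
kubectl wait --for=condition=Ready --timeout=15m bucket/crossplane-hello-world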
Using the AWS CLI, list the S3 buckets:
AWS_CONFIG_FILE=tmp/aws-credentials.txt aws s3 ls
Delete the created bucket:
kubectl delete bucket/crossplane-hello-world
- k3s has a custom k8s authenticator module that does user authentication from /var/lib/rancher/k3s/server/cred/passwd.