> This repository was archived by the owner on Nov 13, 2024. It is now read-only.


# k3s-at-home

... managed with Flux and Renovate, encrypted with Age 🤖

This repository is shamelessly stolen and modified from the great work that Auricom has done on his homelab.

It has been modified to my needs: secrets are encrypted with Age instead of PGP, Calico serves as the CNI in eBPF mode, and ingress is handled by cloudflared and/or ingress-nginx.





## Install pre-commit hooks

```sh
pre-commit install
```
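For reference, a minimal `.pre-commit-config.yaml` driving those hooks might look like the following. This is an illustrative sketch only; the hook selection and pinned `rev` here are assumptions, not this repository's actual configuration:

```yaml
# Illustrative example; the repository's real hook list may differ.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-yaml
        args: [--allow-multiple-documents]
      - id: end-of-file-fixer
      - id: trailing-whitespace
```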

## Installing K3s

K3s can be customized either with a config file or with CLI arguments and environment variables. I've opted for CLI arguments and environment variables, and may consider switching to the config file at a later date.
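For comparison, a config-file equivalent would live at `/etc/rancher/k3s/config.yaml`, where each CLI flag becomes a key. The excerpt below is an illustrative sketch covering only a few of the flags used here, not a file that exists in this repo:

```yaml
# Illustrative /etc/rancher/k3s/config.yaml excerpt (not part of this repo)
token: "<my-token>"
disable:
  - local-storage
  - traefik
  - metrics-server
  - servicelb
protect-kernel-defaults: true
secrets-encryption: true
flannel-backend: none
disable-kube-proxy: true
cluster-cidr: "10.0.0.0/8"
```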

My installation command looks roughly like the following, incorporating as much of the K3s CIS hardening guide as possible.

```sh
mkdir -p /var/lib/rancher/k3s/server
cat <<-EOHD > /var/lib/rancher/k3s/server/audit.yaml
---
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOHD
# cat <<-EOHD > /var/lib/rancher/k3s/server/psa.yaml
# ---
# apiVersion: apiserver.config.k8s.io/v1
# kind: AdmissionConfiguration
# plugins:
# - name: PodSecurity
#   configuration:
#     apiVersion: pod-security.admission.config.k8s.io/v1beta1
#     kind: PodSecurityConfiguration
#     defaults:
#       enforce: "restricted"
#       enforce-version: "latest"
#       audit: "restricted"
#       audit-version: "latest"
#       warn: "restricted"
#       warn-version: "latest"
#     exemptions:
#       usernames: []
#       runtimeClasses: []
#       namespaces: [kube-system, infra]
# EOHD
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest sh -s - --token "${MYTOKEN}" \
  --disable local-storage \
  --disable traefik \
  --disable metrics-server \
  --disable servicelb \
  --disable-cloud-controller \
  --protect-kernel-defaults=true \
  --write-kubeconfig-mode 0644 \
  --kube-proxy-arg=metrics-bind-address=0.0.0.0 \
  --kube-scheduler-arg=bind-address=0.0.0.0 \
  --kube-apiserver-arg=audit-log-path=/var/log/k3s/server/audit.log \
  --kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml \
  --kube-apiserver-arg=audit-log-maxage=7 \
  --kube-apiserver-arg=audit-log-maxbackup=3 \
  --kube-apiserver-arg=audit-log-maxsize=50 \
  --kube-apiserver-arg=request-timeout=300s \
  --kube-apiserver-arg=service-account-lookup=true \
  --kube-controller-manager-arg=terminated-pod-gc-threshold=10 \
  --kube-controller-manager-arg=use-service-account-credentials=true \
  --kube-controller-manager-arg=bind-address=0.0.0.0 \
  --kubelet-arg=streaming-connection-idle-timeout=5m \
  --kubelet-arg=make-iptables-util-chains=true \
  --kubelet-arg=containerd=/run/k3s/containerd/containerd.sock \
  --secrets-encryption \
  --flannel-backend=none \
  --disable-network-policy \
  --disable-kube-proxy \
  --cluster-cidr "10.0.0.0/8" \
  --etcd-expose-metrics \
  --etcd-s3 \
  --etcd-s3-endpoint "${S3_ENDPOINT}" \
  --etcd-s3-access-key "${ETCD_S3_ACCESS_KEY}" \
  --etcd-s3-secret-key "${ETCD_S3_SECRET_KEY}" \
  --etcd-s3-bucket k8s \
  --etcd-s3-folder backups/etcd \
  --cluster-init
```

For agents, the following was used to connect them:

```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=latest K3S_URL=https://${NODE}:6443 K3S_TOKEN="${MY_TOKEN}" sh -s - \
  --protect-kernel-defaults=true \
  --kube-proxy-arg=metrics-bind-address=0.0.0.0 \
  --kubelet-arg=streaming-connection-idle-timeout=5m \
  --kubelet-arg=make-iptables-util-chains=true \
  --kubelet-arg=containerd=/run/k3s/containerd/containerd.sock
```

## Bootstrap Flux

```sh
kubectl create ns flux-system  # Seed the namespace

# Add the sops Age key used for secret decryption
kubectl create secret generic sops-age --namespace=flux-system \
  --from-file=age.agekey=/root/age.agekey
```
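For SOPS to encrypt manifests against that key, the repo also needs creation rules mapping file paths to the Age recipient. A typical `.sops.yaml` is sketched below; the recipient string and path pattern are placeholders, not this repository's actual values:

```yaml
# Placeholder recipient; use the public key that matches /root/age.agekey
creation_rules:
  - path_regex: .*\.sops\.ya?ml
    encrypted_regex: ^(data|stringData)$
    age: age1examplepublickeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```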

```sh
curl -s https://fluxcd.io/install.sh | sudo bash  # Ensure the flux CLI on the host is the latest
```

```sh
flux bootstrap github \
  --version=v0.41.2 \
  --owner=j0sh3rs \
  --repository=k3s-at-home \
  --path=bootstrap \
  --personal \
  --network-policy=false

flux bootstrap github \
  --version=v0.41.2 \
  --owner=j0sh3rs \
  --repository=k3s-at-home \
  --path=cluster \
  --personal \
  --network-policy=false
```

## OS Tuning

All server and agent nodes are augmented with the following kernel tunings:

```
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 512
fs.inotify.max_queued_events = 16384
vm.max_map_count = 262144
fs.file-max = 1048576
net.ipv4.ip_forward = 1
fs.suid_dumpable = 0
kernel.core_pattern = |/bin/false
vm.panic_on_oom = 0
vm.overcommit_memory = 1
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.keys.root_maxbytes = 25000000
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_tw_buckets = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_mtu_probing = 1
net.core.bpf_jit_enable = 1
```
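These tunings can be persisted as a `sysctl.d` drop-in. The sketch below stages the file in a temp directory so it can run without root; on a real node the target would be `/etc/sysctl.d/99-k3s.conf`, applied with `sysctl --system` (that filename is my convention, not something from this repo):

```shell
# Stage a sysctl.d drop-in with a subset of the tunings above.
# On a node: write to /etc/sysctl.d/99-k3s.conf and run `sysctl --system`.
tmpdir=$(mktemp -d)
cat > "$tmpdir/99-k3s.conf" <<'EOF'
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
net.core.somaxconn = 65535
net.ipv4.ip_forward = 1
EOF
echo "staged $(grep -c '=' "$tmpdir/99-k3s.conf") settings at $tmpdir/99-k3s.conf"
```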

Further, since the nodes are not exposed directly to the internet, the following GRUB kernel boot parameters have been set:

```
mitigations=off elevator=mq-deadline transparent_hugepage=always
```
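On Debian-style systems these parameters would typically be appended to the kernel command line in `/etc/default/grub`, followed by `update-grub` and a reboot. The excerpt below is illustrative; the path, variable name, and exact procedure vary by distro:

```sh
# Illustrative /etc/default/grub excerpt; run `update-grub` after editing
GRUB_CMDLINE_LINUX_DEFAULT="mitigations=off elevator=mq-deadline transparent_hugepage=always"
```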

## Roadmap

- Re-implement kured and the system-upgrade-controller
- Move to Cilium
- Figure out a better bootstrapping order
  - Conflicts between the initial Flux run, CRDs, Prometheus monitors, and Ingresses
- Upstream improvements to Helm charts as outputs of security tool analysis
  - Popeye
  - kube-bench
  - Polaris
