Kush-Web-Services is my homelab GitOps and IaC project. This is the second version of the cluster, built with tools such as Ansible, Terraform, Kubernetes, Flux, and GitHub. My main purpose was to practice Kubernetes, GitOps, and Raspberry Pis, and to build my own infrastructure for deploying my own stuff :)
- Ansible - Server configuration
- Cert-Manager - SSL certificates management
- Cloudflare - Domain and DNS management
- External-DNS - SOON... (syncing DNS records with Cloudflare)
- FluxCD - Keeping the cluster in sync with my Git repo
- Helm - Package manager for Kubernetes
- Kubernetes - Container orchestration
- Longhorn - Distributed block storage for persistent storage
- MetalLB - Load Balancer for bare metal Kubernetes clusters
- SOPS - Encrypted secrets in Git
- Terraform - VM provisioning
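As an illustration of how MetalLB serves LoadBalancer services on a bare-metal cluster like this one, here is a minimal layer-2 configuration sketch. The pool name and IP range below are assumptions for illustration, not values taken from this repo.

```yaml
# Minimal MetalLB layer-2 sketch (MetalLB >= v0.13, CRD-based configuration).
# The pool name and address range are hypothetical; use addresses from your own LAN.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool            # hypothetical pool name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.220   # example range on the home network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool              # advertise addresses from the pool above via ARP
```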
In the /cluster directory you'll find the following:
- base: directory is the entrypoint to Flux.
- crds: directory contains custom resource definitions (CRDs) that need to exist cluster-wide before anything else is deployed.
- core: directory (depends on crds) contains important infrastructure applications (grouped by namespace) that should never be pruned by Flux.
- apps: directory (depends on core) is where common applications (grouped by namespace) are placed; Flux will prune resources here if they are no longer tracked in Git (see the Kustomization sketch after the directory tree below).
./cluster
├── ./apps
├── ./base
├── ./core
└── ./crds
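The layering above is expressed with Flux Kustomization objects that depend on one another. The sketch below is illustrative only, assuming a GitRepository source named flux-system and an age key secret named sops-age; the actual names and paths in this repo may differ.

```yaml
# Illustrative Flux Kustomizations for the core -> apps dependency chain.
# Resource names, paths, and the sops-age secret name are assumptions.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: core
  namespace: flux-system
spec:
  interval: 10m
  path: ./cluster/core
  prune: false                 # core infrastructure is never pruned
  dependsOn:
    - name: crds               # wait for CRDs to exist first
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops             # decrypt SOPS-encrypted secrets from Git
    secretRef:
      name: sops-age
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./cluster/apps
  prune: true                  # apps removed from Git are pruned from the cluster
  dependsOn:
    - name: core               # apps only reconcile after core is ready
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops
    secretRef:
      name: sops-age
```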
Explore the /cluster/apps folder and you will see the cluster's namespaces with all the apps inside.
- dev - self-hosted dev tools
- media - a full media server (Plex, Sonarr, Radarr, qBittorrent, and more!)
- monitoring - the well-known monitoring stack (Grafana, Prometheus, Alertmanager)
- networking - mainly the Traefik ingress controller and Cloudflare
- and there are many more to come...
Some charts that aren't available upstream are deployed from my own Helm repo: kws-charts.
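As a rough sketch of how such a chart is consumed, a Flux HelmRepository and HelmRelease pair could look like the following; the repository URL, chart name, version, and target namespace here are placeholders, not values taken from kws-charts.

```yaml
# Hypothetical example of consuming a chart from the kws-charts Helm repo via Flux.
# The URL, chart name, version, and namespace are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: kws-charts
  namespace: flux-system
spec:
  interval: 1h
  url: https://example.github.io/kws-charts    # placeholder URL
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: some-app                               # placeholder release name
  namespace: dev
spec:
  interval: 10m
  chart:
    spec:
      chart: some-app                          # placeholder chart name
      version: ">=0.1.0"
      sourceRef:
        kind: HelmRepository
        name: kws-charts
        namespace: flux-system
  values: {}                                   # app-specific values go here
```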
Device | Count | OS & Data Disk Size | RAM | Operating System | Purpose |
---|---|---|---|---|---|
Raspberry Pi 4B | 1 | 512GB SSD (Longhorn) | 8GB | Ubuntu 22.04 | Kubernetes (k3s) master |
Raspberry Pi 4B | 2 | 512GB SSD (Longhorn) | 8GB | Ubuntu 22.04 | Kubernetes (k3s) workers |
Raspberry Pi 4B | 1 | 512GB SSD | 8GB | Raspberry Pi OS (64GB) | Monitoring device |
Raspberry Pi 4B | 1 | 2TB SSD | 8GB | OMV | NAS - Network Attached Storage |
Raspberry Pi Zero 2W | 1 | N/A | 512MB | Raspberry Pi OS Lite (32GB) | Home DNS server - Pi-hole |
SanDisk Extreme SSD | 3 | 500GB | N/A | N/A | Distributed persistent storage |
SanDisk Extreme SSD | 1 | 500GB | N/A | N/A | Monitoring storage drive |
SanDisk Extreme SSD | 1 | 2TB | N/A | N/A | NAS storage drive |
WD My Book HDD | 1 | 3TB | N/A | N/A | External backup HDD for the NAS |
TP-Link TL-SG1005P V2 | 1 | N/A | N/A | N/A | Network switch with PoE support |
APC Back-UPS 1200VA/650W | 1 | N/A | N/A | N/A | UPS |
Router (SOON) | 1 | N/A | N/A | N/A | 1000 Mbps router |
As an educational project, I suggest starting with the manual work: explore Kubernetes, Raspberry Pis, and Docker, and build a cluster of your own from scratch.
For those who want to go to the next stage - yes, you can install the same cluster on your own machines! I'll provide the original Kubernetes @Home instructions along with my additions in the /docs folder.
There are three types of backups that should be done on the cluster:
- Longhorn PV/PVCs backups
- ETCD state backup
- NAS data backup
For more details, see /docs/DR-plan.md.
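For the Longhorn part, a recurring backup job can be declared in Git alongside the other manifests. The sketch below is a minimal example assuming a Longhorn backup target (e.g. NFS or S3) is already configured; the job name, schedule, and retention values are placeholders, not settings from this repo.

```yaml
# Minimal Longhorn recurring-backup sketch; assumes the Longhorn backup target
# is already configured. Name, cron schedule, and retention are placeholders.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-backup
  namespace: longhorn-system
spec:
  cron: "0 3 * * *"       # every night at 03:00
  task: backup            # take backups, not just local snapshots
  groups:
    - default             # applies to volumes in the default group
  retain: 7               # keep the last 7 backups per volume
  concurrency: 2          # back up at most 2 volumes at a time
  labels: {}
```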
Thanks to the great, supportive community at Kubernetes @Home. Most of the support and inspiration for my cluster came from onedr0p and his super cluster.