A comprehensive solution for setting up a VM Data Center and Kubernetes cluster on Hetzner baremetal servers with automated provisioning, monitoring, and management capabilities.
This project provides:
- Pre-provisioned Server Support: Works with existing Rocky Linux 9 baremetal servers
- Shared RAID Storage: 1.8TB RAID storage shared across all nodes for K8s and VMs
- Automated K8s Setup: High-availability Kubernetes cluster with 3 control plane nodes
- SSH Key Management: Flexible SSH key and port configuration
- Resource Pooling: Unified resource management across all baremetal servers
- VM Provisioning: GUI-based VM creation with multiple OS options
- Storage Management: Shared RAID storage (1.8TB) with dynamic allocation and monitoring
- Monitoring: Comprehensive monitoring with Grafana dashboards
- User Management: RBAC with namespace isolation
- Scalability: Support for up to 200 baremetal servers and 300 VMs
- ✅ Automated Kubernetes cluster setup with HA control plane
- ✅ Resource pooling from multiple baremetal servers
- ✅ Support for up to 200 baremetal servers
- ✅ Automated VM provisioning (up to 300 VMs)
- ✅ Multiple OS support (CentOS7, RHEL7/8/9, RockyLinux9, Ubuntu20/22/24, OEL8.10)
- ✅ Rancher-based cluster management
- ✅ GUI for adding/removing servers from cluster
- ✅ Resource allocation and scaling per namespace
- ✅ Comprehensive monitoring dashboard
- ✅ Slack/JIRA notification system
- ✅ Health monitoring for all baremetal servers
- ✅ User management with admin/user roles
- ✅ Namespace-based resource isolation
- ✅ Default user acceldata with SSH key on all VMs/containers
- ✅ RBAC configuration (see the sketch after this list)
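As an illustration of the namespace isolation and RBAC features, the sketch below shows how a per-team namespace with a resource quota and a namespace-scoped role binding can be created with plain kubectl. The namespace, quota values, and user name are placeholders; the project's own RBAC manifests may configure this differently.

```bash
# Hypothetical example: one namespace per team, with a quota and an edit
# role binding scoped to that namespace only (all names are placeholders).
kubectl create namespace team-a
kubectl create quota team-a-quota --namespace=team-a \
  --hard=requests.cpu=8,requests.memory=32Gi,pods=50
kubectl create rolebinding team-a-edit --namespace=team-a \
  --clusterrole=edit --user=jane@example.com
```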
```
├── terraform/           # Infrastructure as Code
│   ├── hetzner/         # Hetzner provider configuration
│   ├── kubernetes/      # K8s cluster setup
│   └── monitoring/      # Monitoring infrastructure
├── ansible/             # Configuration management
│   ├── playbooks/       # Ansible playbooks
│   ├── roles/           # Reusable roles
│   └── inventory/       # Server inventory
├── kubernetes/          # K8s manifests and configs
│   ├── cluster-setup/   # Cluster initialization
│   ├── monitoring/      # Monitoring stack
│   └── applications/    # Application deployments
├── monitoring/          # Monitoring configuration
│   ├── prometheus/      # Prometheus configs
│   ├── grafana/         # Grafana dashboards
│   └── alertmanager/    # Alerting rules
├── vm-provisioning/     # VM management system
│   ├── api/             # REST API for VM operations
│   ├── frontend/        # Web GUI
│   └── templates/       # VM templates
├── scripts/             # Utility scripts
└── docs/                # Documentation
```
- Terraform >= 1.5.0
- Ansible >= 2.12.0
- kubectl >= 1.28.0
- Helm >= 3.12.0
- Docker >= 20.10.0
- SSH access to baremetal servers
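A quick, optional sanity check that the required tooling is installed; the commands below only print the installed versions for manual comparison against the minimums above.

```bash
# Print installed versions of the required tools
terraform version
ansible --version
kubectl version --client
helm version
docker --version
```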
- Operating System: Rocky Linux 9 (latest)
- CPU: Minimum 2 cores, recommended 4+ cores
- RAM: Minimum 4GB, recommended 8GB+
- Storage: Minimum 40GB, recommended 100GB+
- Network: Stable network connectivity
- Access: Root or sudo access to all servers
- SSH: SSH access with private key
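It can also help to spot-check each server against the requirements above before provisioning. The host, SSH port, and key path in this sketch are placeholders.

```bash
# Replace host, SSH port, and key path with your own values
ssh -p 22 -i ~/.ssh/id_ed25519 root@203.0.113.11 \
  'cat /etc/rocky-release; nproc; free -h; df -h; lsblk'
```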
- Clone the repository
  ```bash
  git clone <repository-url>
  cd hetzner-dc-k8s-setup
  ```
- Configure the environment
  ```bash
  cp .env.example .env
  # Edit .env with your server IPs and SSH configuration
  ```
- Prepare the servers (if not already done)
  ```bash
  # Follow the Rocky Linux setup guide
  # See docs/rocky-linux-setup.md for detailed instructions
  ```
- Configure the infrastructure
  ```bash
  cd terraform
  terraform init
  terraform plan
  terraform apply
  ```
- Set up the Kubernetes cluster (an example inventory sketch follows these steps)
  ```bash
  cd ../ansible
  ansible-playbook -i inventory/hosts.yml playbooks/prepare-servers.yml
  ansible-playbook -i inventory/hosts.yml playbooks/setup-k8s-cluster.yml
  ```
- Deploy the monitoring stack
  ```bash
  cd ../kubernetes
  kubectl apply -f monitoring/
  ```
- Access the Rancher dashboard
  ```bash
  kubectl port-forward svc/rancher-server 8080:80
  # Open http://localhost:8080
  ```
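The cluster setup step reads the server list from ansible/inventory/hosts.yml. The snippet below is only a sketch of a typical Ansible YAML inventory; the group names, hostnames, and IPs are placeholders, so match them to what the playbooks in this repository actually expect.

```bash
# Sketch only: writes a placeholder inventory; adjust groups, hosts, and SSH settings
cat > ansible/inventory/hosts.yml <<'EOF'
all:
  vars:
    ansible_user: root
    ansible_port: 22
    ansible_ssh_private_key_file: ~/.ssh/id_ed25519
  children:
    control_plane:
      hosts:
        cp01: { ansible_host: 203.0.113.11 }
        cp02: { ansible_host: 203.0.113.12 }
        cp03: { ansible_host: 203.0.113.13 }
    workers:
      hosts:
        worker01: { ansible_host: 203.0.113.21 }
        worker02: { ansible_host: 203.0.113.22 }
EOF
```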
The monitoring stack includes:
- Prometheus: Metrics collection and storage
- Grafana: Visualization and dashboards
- AlertManager: Alert routing and notification
- Node Exporter: Baremetal metrics
- cAdvisor: Container metrics
Access Grafana at http://<grafana-service>:3000 (default: admin/admin)
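If Grafana is not exposed externally yet, a port-forward is enough for a first look. The namespace and service names below are assumptions; check `kubectl get svc -A` for the names used in your deployment.

```bash
# Confirm the monitoring pods are running (namespace name is an assumption)
kubectl get pods -n monitoring
# Forward Grafana locally, then open http://localhost:3000
kubectl -n monitoring port-forward svc/grafana 3000:3000
```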
This project integrates with:
- Rancher: Cluster management and GUI
- Helm: Package management
- kubectl: Command-line interface
- k9s: Terminal-based UI
- Telepresence: Local development integration
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For support and questions:
- Create an issue in the GitHub repository
- Check the troubleshooting guide
- Review the FAQ
- v1.0.0: Initial release with basic K8s cluster setup
- v1.1.0: Added VM provisioning capabilities
- v1.2.0: Enhanced monitoring and alerting
- v1.3.0: GUI improvements and user management