This is an Ansible playbook to initialize and deploy an RKE2 Kubernetes cluster with air-gap requirements. It downloads all artifacts from a jump host that has access to both the internet and the air-gapped VMs.
Artifacts downloaded by the playbook:
- RKE2 binary
- Bootstrap Helm charts saved as tar:
  - Gitea
  - Longhorn
  - ArgoCD
- Container images saved as tar
- Post-bootstrap Helm charts, which will be deployed by ArgoCD
Based on the inventory file, the playbook can configure a multi-node cluster. A template for this inventory file is provided at `ansible/inventory.ini.template`; modify it as needed, adding or removing VMs. So far, this playbook has only been tested with 3 master nodes and 2 agents, as in the sketch below.
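As an illustration only (group names and addresses are assumptions; `ansible/inventory.ini.template` is the authoritative reference), a 3-master, 2-agent inventory could look like:

```ini
[masters]
master-1 ansible_host=10.0.0.10
master-2 ansible_host=10.0.0.11
master-3 ansible_host=10.0.0.12

[agents]
agent-1 ansible_host=10.0.0.20
agent-2 ansible_host=10.0.0.21
```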
Also, in `/artifacts/rke2/nodes/<hostname>` you will find examples of the `config.yaml` required by RKE2 to configure each node; see the sketch below. Make sure to use the hostname of each VM as the folder name for each config file.
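As a minimal sketch for a server node joining an existing master, with placeholder values (the files under `/artifacts/rke2/nodes/` are the real reference; `token`, `server`, and `tls-san` are standard RKE2 `config.yaml` keys):

```yaml
# /artifacts/rke2/nodes/master-2/config.yaml (illustrative values only)
token: my-shared-secret          # cluster join token
server: https://10.0.0.10:9345   # first server's supervisor URL; omit on the first server node
tls-san:
  - 10.0.0.10
  - rke2.example.internal
```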
The playbook performs the following steps:
- Download the bootstrap Helm charts as tar, as specified under the var `helmCharts` (see the sketch after this list).
- Download all Helm charts used under `/argocd/manifest/argo-cd-helm-chart/values.yaml`.
- Run `helm template` against each Helm chart tar to get the list of container images to pull and save as tar (see the sketch after this list). Container images can also be added via the var `docker_images`.
- Pull the container images from the var `docker_images` and store them in `ansible/rke2/bootstrap/images`.
- Replace values in the argo-cd-helm-chart `values.yaml` to provide an air-gap path, i.e. use the Gitea repo instead of the public Helm chart repository and the master branch instead of a version.
- Archive the `argocd/manifest` repo.
- Download the RKE2 artifacts.
- Upload all artifacts to each node, in particular the container image tars, since RKE2 imports every tar found under `/var/lib/rancher/rke2/agent/images/`. This avoids the need for a registry, for retagging each container image, and for replacing image references in the `values.yaml` of the app-of-apps Helm chart.
- Install RKE2 on the master nodes one at a time.
- Install RKE2 on the agents in parallel.
- Install Longhorn as our storage class.
- Install Gitea as our repository, where we store an app-of-apps Helm chart for ArgoCD to deploy.
- Install ArgoCD as our air-gap CD tool, which reads Gitea, hosted in the same cluster, as the source of truth.
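The exact schema of the `helmCharts` and `docker_images` vars lives in the playbook's variable files; the shape below is a hypothetical illustration, not the playbook's actual format:

```yaml
# Hypothetical var layout; check the playbook's defaults/group_vars for the real schema
helmCharts:
  - repo: https://charts.longhorn.io
    name: longhorn
    version: 1.6.0
docker_images:
  - rancher/mirrored-pause:3.6
  - longhornio/longhorn-manager:v1.6.0
```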
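To illustrate why `helm template` is enough to enumerate a chart's images, here is a minimal sketch (chart filename, image names, and output path are examples):

```sh
# Render the chart tarball offline and extract the unique image references
helm template ./longhorn-1.6.0.tgz \
  | grep -E '^[[:space:]]*image:' | awk '{print $2}' | tr -d '"' | sort -u

# Pull one image and save it as a tar that RKE2 imports at startup
docker pull rancher/mirrored-pause:3.6
docker save -o ansible/rke2/bootstrap/images/pause.tar rancher/mirrored-pause:3.6
```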
Apply the whole playbook with this command:

```sh
ansible-playbook -i inventory.ini rke2.yml
```
To sync Helm charts added to the ArgoCD values file, use the `airgap-values` tag:

```sh
ansible-playbook -i inventory.ini rke2.yml --tags airgap-values
```
Requirements to run the Ansible playbook:
- an `inventory.ini` file
- a `config.yaml` file for each node inside `/artifacts/rke2/nodes/<hostname>`
- yq (v4.41.1)
- ansible (2.16.3)
- helm (v3.13.3)
For testing purposes, you can set this environment variable to skip accepting the SSH host key fingerprint from each host:

```sh
export ANSIBLE_HOST_KEY_CHECKING=False
```
If you plan to use OpenTofu to spin up VMs in AWS, you will also need:
- terragrunt (0.54.22)
- opentofu (v1.6.1)
Environment variables required to run Terragrunt:

```sh
export TERRAGRUNT_TFPATH=tofu
export TF_VAR_public_ip="x.x.x.x/32"   # your public IP, whitelisted in the nodes' security group
```
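With those variables exported, provisioning would typically be driven from the repo's Terragrunt directory; the exact working directory is repo-specific, but the invocation would be something like:

```sh
terragrunt run-all apply
```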