diff --git a/.github/workflows/ci-non-go.sh b/.github/workflows/ci-non-go.sh index 40d3d47..06ac991 100755 --- a/.github/workflows/ci-non-go.sh +++ b/.github/workflows/ci-non-go.sh @@ -18,7 +18,7 @@ if ! shfmt -f . | xargs shfmt -s -l -d; then failed=1 fi -if ! rufo vagrant/Vagrantfile; then +if ! rufo stack/vagrant/Vagrantfile; then failed=1 fi diff --git a/.gitignore b/.gitignore index 997ca2f..9f5727b 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1,6 @@ -.vagrant \ No newline at end of file +.vagrant +error.log +.task +.state +capt/output/ +.vscode/ \ No newline at end of file diff --git a/README.md b/README.md index 0fca442..c353569 100644 --- a/README.md +++ b/README.md @@ -1,40 +1,7 @@ # Playground -The playground is an example deployment of the Tinkerbell stack for use in learning and testing. It is not a production reference architecture. -Please use the [Helm chart](https://github.com/tinkerbell/charts) for production deployments. +Welcome to the Tinkerbell Playground! This playground repository holds example deployments for use in learning and testing. +The following playgrounds are available: -## Quick-Starts - -The following quick-start guides will walk you through standing up the Tinkerbell stack. -There are a few options for this. -Pick the one that works best for you. - -## Options - -- [Vagrant and VirtualBox](docs/quickstarts/VAGRANTVBOX.md) -- [Vagrant and Libvirt](docs/quickstarts/VAGRANTLVIRT.md) -- [Kubernetes](docs/quickstarts/KUBERNETES.md) - -## Next Steps - -By default the Vagrant quickstart guides automatically install Ubuntu on the VM (machine1). You can provide your own OS template. To do this: - -1. Login to the stack VM - - ```bash - vagrant ssh stack - ``` - -1. Add your template. An example Template object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/template.yaml) and more Template documentation can be found [here](https://tinkerbell.org/docs/concepts/templates/). - - ```bash - kubectl apply -f my-OS-template.yaml - ``` - -1. Create the workflow. An example Workflow object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/workflow.yaml). - - ```bash - kubectl apply -f my-custom-workflow.yaml - ``` - -1. Restart the machine to provision (if using the vagrant playground test machine this is done by running `vagrant destroy -f machine1 && vagrant up machine1`) +- [Tinkerbell stack playground](stack/README.md) +- [Cluster API Provider Tinkerbell (CAPT) playground](capt/README.md) diff --git a/capt/README.md b/capt/README.md new file mode 100644 index 0000000..b9744b1 --- /dev/null +++ b/capt/README.md @@ -0,0 +1,87 @@ +# Cluster API Provider Tinkerbell (CAPT) Playground + +The Cluster API Provider Tinkerbell (CAPT) is a Kubernetes Cluster API provider that uses Tinkerbell to provision machines. You can find more information about CAPT [here](https://github.com/tinkerbell/cluster-api-provider-tinkerbell). The CAPT playground is an example deployment for use in learning and testing. It is not a production reference architecture. + +## Getting Started + +The CAPT playground is a tool that will create a local CAPT deployment and a single workload cluster. This includes creating and installing a Kubernetes cluster (KinD), the Tinkerbell stack, all CAPI and CAPT components, Virtual machines that will be used to create the workload cluster, and a Virtual BMC server to manage the VMs. 
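+
+After `task create-playground` completes (see the Usage section below), you can sanity check the individual pieces directly. The following is only a sketch and assumes the defaults from [`config.yaml`](./config.yaml); adjust names and paths if you have customized them:
+
+```bash
+kind get clusters                                     # the KinD management cluster
+KUBECONFIG=output/kind.kubeconfig kubectl get pods -A # Tinkerbell stack, CAPI, and CAPT components
+virsh --connect qemu:///system list --all             # the node VMs
+docker ps --filter name=virtualbmc                    # the Virtual BMC server
+```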
+
+Start by reviewing and installing the [prerequisites](#prerequisites) and understanding and customizing the [configuration file](./config.yaml) as needed.
+
+## Prerequisites
+
+### Binaries
+
+- [Libvirt](https://wiki.debian.org/KVM) (libvirtd) >= 8.0.0
+- [Docker](https://docs.docker.com/engine/install/) >= 24.0.7
+- [Helm](https://helm.sh/docs/intro/install/) >= v3.13.1
+- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) >= v0.20.0
+- [clusterctl](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) >= v1.6.0
+- [kubectl](https://www.downloadkubernetes.com/) >= v1.28.2
+- [virt-install](https://virt-manager.org/) >= 4.0.0
+- [task](https://taskfile.dev/installation/) >= 3.37.2
+
+### Hardware
+
+- at least 60GB of free and very fast disk space (etcd is very disk I/O sensitive)
+- at least 8GB of free RAM
+- at least 4 CPU cores
+
+## Usage
+
+Start by looking at the [`config.yaml`](./config.yaml) file. It contains the configuration for the playground, which you can customize by changing its values. We recommend starting with the defaults until you are familiar with the playground.
+
+Create the CAPT playground:
+
+```bash
+# Run the creation process and follow the next steps printed at the end of the process.
+task create-playground
+```
+
+Delete the CAPT playground:
+
+```bash
+task delete-playground
+```
+
+## Next Steps
+
+With the playground up and running and a workload cluster created, you can run through a few CAPI lifecycle operations.
+
+### Move/pivot the Tinkerbell stack and CAPI/CAPT components to a workload cluster
+
+To be written.
+
+### Upgrade the management cluster
+
+To be written.
+
+### Upgrade the workload cluster
+
+To be written.
+
+### Scale out the workload cluster
+
+To be written.
+
+### Scale in the workload cluster
+
+To be written.
+
+## Known Issues
+
+### DNS issue
+
+KinD on Ubuntu has a known issue with DNS resolution inside KinD pod containers. This affects the download of HookOS during the Tink stack Helm deployment. There are a few [known workarounds](https://github.com/kubernetes-sigs/kind/issues/1594#issuecomment-629509450). The recommendation for the CAPT playground is to add a DNS nameserver to Docker's `daemon.json` file. This can be done by adding the following to `/etc/docker/daemon.json`:
+
+```json
+{
+  "dns": ["1.1.1.1"]
+}
+```
+
+Then restart Docker:
+
+```bash
+sudo systemctl restart docker
+```
diff --git a/capt/Taskfile.yaml b/capt/Taskfile.yaml
new file mode 100644
index 0000000..fd6cbbf
--- /dev/null
+++ b/capt/Taskfile.yaml
@@ -0,0 +1,128 @@
+version: "3"
+
+includes:
+  create: ./tasks/Taskfile-create.yaml
+  delete: ./tasks/Taskfile-delete.yaml
+  vbmc: ./tasks/Taskfile-vbmc.yaml
+  capi: ./tasks/Taskfile-capi.yaml
+
+vars:
+  OUTPUT_DIR:
+    sh: echo $(yq eval '.outputDir' config.yaml)
+  CURR_DIR:
+    sh: pwd
+  STATE_FILE: ".state"
+  STATE_FILE_FQ_PATH:
+    sh: echo {{joinPath .CURR_DIR .STATE_FILE}}
+
+tasks:
+  create-playground:
+    silent: true
+    summary: |
+      Create the CAPT playground. Use the config.yaml file to define things like cluster size and Kubernetes version.
+    cmds:
+      - task: system-deps-warnings
+      - task: validate-binaries
+      - task: ensure-output-dir
+      - task: generate-state
+      - task: create:playground-ordered
+      - task: next-steps
+
+  delete-playground:
+    silent: true
+    summary: |
+      Delete the CAPT playground.
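+    # Tears down the KinD management cluster, the node VMs, the Virtual BMC container, and the output directory (see ./tasks/Taskfile-delete.yaml).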
+ cmds: + - task: validate-binaries + - task: delete:playground + + validate-binaries: + silent: true + summary: | + Validate all required dependencies for the CAPT playground. + cmds: + - for: + [ + "virsh", + "docker", + "helm", + "kind", + "kubectl", + "clusterctl", + "virt-install", + "brctl", + "yq", + ] + cmd: command -v {{ .ITEM }} >/dev/null || echo "'{{ .ITEM }}' was not found in the \$PATH, please ensure it is installed." + # sudo apt install virtinst # for virt-install + # sudo apt install bridge-utils # for brctl + + system-deps-warnings: + summary: | + Run CAPT playground system warnings. + silent: true + cmds: + - echo "Please ensure you have the following:" + - echo "60GB of free and very fast disk space (etcd is very disk I/O sensitive)" + - echo "8GB of free RAM" + - echo "4 CPU cores" + + ensure-output-dir: + summary: | + Create the output directory. + cmds: + - mkdir -p {{.OUTPUT_DIR}} + - mkdir -p {{.OUTPUT_DIR}}/xdg + status: + - echo ;[ -d {{.OUTPUT_DIR}} ] + - echo ;[ -d {{.OUTPUT_DIR}}/xdg ] + + generate-state: + summary: | + Populate the state file. + sources: + - config.yaml + generates: + - .state + cmds: + - ./scripts/generate_state.sh config.yaml .state + + next-steps: + silent: true + summary: | + Next steps after creating the CAPT playground. + vars: + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + NODE_BASE: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KIND_KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - | + echo + echo The workload cluster is now being created. + echo Once the cluster nodes are up and running, you will need to deploy a CNI for the cluster to be fully functional. + echo The management cluster kubeconfig is located at: {{.KIND_KUBECONFIG}} + echo The workload cluster kubeconfig is located at: {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig + echo + echo 1. Watch and wait for the first control plane node to be provisioned successfully: STATE_SUCCESS + echo "KUBECONFIG={{.KIND_KUBECONFIG}} kubectl get workflows -n {{.NAMESPACE}} -w" + echo + echo + echo 2. Watch and wait for the Kubernetes API server to be ready and responding: + echo "until KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl get node; do echo 'Waiting for Kube API server to respond...'; sleep 5; done" + echo + echo 3. Deploy a CNI + echo Cilium + echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig cilium install" + echo or KUBEROUTER + echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml" + echo + echo 4. 
Watch and wait for all nodes to join the cluster and be ready: + echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl get nodes -w" + - touch {{.OUTPUT_DIR}}/.next-steps-displayed + status: + - echo ;[ -f {{.OUTPUT_DIR}}/.next-steps-displayed ] diff --git a/capt/config.yaml b/capt/config.yaml new file mode 100644 index 0000000..9cd0255 --- /dev/null +++ b/capt/config.yaml @@ -0,0 +1,29 @@ +--- +clusterName: "capt-playground" +outputDir: "output" +namespace: "tink" +counts: + controlPlanes: 1 + workers: 1 + spares: 3 +versions: + capt: 0.5.3 + chart: 0.4.5 + kube: v1.28.3 + os: 20.04 + kubevip: 0.8.0 +os: + registry: ghcr.io/jacobweinstock/capi-images + distro: ubuntu + sshKey: "" +vm: + baseName: "node" + cpusPerVM: 2 + memInMBPerVM: 2048 + diskSizeInGBPerVM: 10 + diskPath: "/tmp" +virtualBMC: + containerName: "virtualbmc" + image: ghcr.io/jacobweinstock/virtualbmc + user: "root" + pass: "calvin" diff --git a/capt/scripts/create_vms.sh b/capt/scripts/create_vms.sh new file mode 100755 index 0000000..7d400a7 --- /dev/null +++ b/capt/scripts/create_vms.sh @@ -0,0 +1,34 @@ +#!/bin/bash + +set -euo pipefail + +# Create VMs + +function main() { + declare -r STATE_FILE="$1" + declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE") + declare BRIDGE_NAME="$(yq eval '.kind.bridgeName' "$STATE_FILE")" + declare CPUS="$(yq eval '.vm.cpusPerVM' "$STATE_FILE")" + declare MEM="$(yq eval '.vm.memInMBPerVM' "$STATE_FILE")" + declare DISK_SIZE="$(yq eval '.vm.diskSizeInGBPerVM' "$STATE_FILE")" + declare DISK_PATH="$(yq eval '.vm.diskPath' "$STATE_FILE")" + + while IFS=$',' read -r name mac; do + # create the VM + virt-install \ + --description "CAPT VM" \ + --ram "$MEM" --vcpus "$CPUS" \ + --os-variant "ubuntu20.04" \ + --graphics "vnc" \ + --boot "uefi,firmware.feature0.name=enrolled-keys,firmware.feature0.enabled=no,firmware.feature1.name=secure-boot,firmware.feature1.enabled=yes" \ + --noautoconsole \ + --noreboot \ + --import \ + --connect "qemu:///system" \ + --name "$name" \ + --disk "path=$DISK_PATH/$name-disk.img,bus=virtio,size=10,sparse=yes" \ + --network "bridge:$BRIDGE_NAME,mac=$mac" + done < <(yq e '.vm.details.[] | [key, .mac] | @csv' "$STATE_FILE") +} + +main "$@" diff --git a/capt/scripts/generate_bmc.sh b/capt/scripts/generate_bmc.sh new file mode 100755 index 0000000..9a2a2f6 --- /dev/null +++ b/capt/scripts/generate_bmc.sh @@ -0,0 +1,29 @@ +#!/bin/bash + +set -euo pipefail + +# This script creates the BMC machine yaml files needed for the CAPT playground. 
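+# It reads .virtualBMC.ip and each node's .vm.details.<name>.bmc.port from the state file and
+# writes one bmc-machine-<name>.yaml per node into the output directory.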
+
+function main() {
+    declare -r STATE_FILE="$1"
+    declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE")
+
+    rm -f "$OUTPUT_DIR"/bmc-machine*.yaml
+
+    namespace=$(yq eval '.namespace' "$STATE_FILE")
+    bmc_ip=$(yq eval '.virtualBMC.ip' "$STATE_FILE")
+
+    while IFS=$',' read -r name port; do
+        export NODE_NAME="$name"
+        export BMC_IP="$bmc_ip"
+        export BMC_PORT="$port"
+        export NAMESPACE="$namespace"
+        # render the Machine CR from the template; the path assumes the script is run from the capt/ directory, as the Taskfile does
+        envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" <templates/bmc-machine.tmpl >"$OUTPUT_DIR"/bmc-machine-"$NODE_NAME".yaml
+        unset NODE_NAME
+        unset BMC_IP
+        unset BMC_PORT
+        unset NAMESPACE
+    done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE")
+}
+
+main "$@"
diff --git a/capt/scripts/generate_hardware.sh b/capt/scripts/generate_hardware.sh
new file mode 100755
index 0000000..99a7568
--- /dev/null
+++ b/capt/scripts/generate_hardware.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+# Generate Hardware custom resources, one per VM.
+
+set -euo pipefail
+
+function main() {
+    declare -r STATE_FILE="$1"
+    declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE")
+    declare -r NS=$(yq eval '.namespace' "$STATE_FILE")
+
+    rm -f "$OUTPUT_DIR"/hardware*.yaml
+
+    while IFS=$',' read -r name mac role ip gateway; do
+        export NODE_NAME="$name"
+        export NODE_MAC="$mac"
+        export NODE_ROLE="$role"
+        export NODE_IP="$ip"
+        export GATEWAY_IP="$gateway"
+        export NAMESPACE="$NS"
+        # render the Hardware CR from the template; the path assumes the script is run from the capt/ directory
+        envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" <templates/hardware.tmpl >"$OUTPUT_DIR"/hardware-"$NODE_NAME".yaml
+        unset NODE_ROLE
+        unset NODE_NAME
+        unset NODE_IP
+        unset NODE_MAC
+        unset GATEWAY_IP
+    done < <(yq e '.vm.details.[] | [key, .mac, .role, .ip, .gateway] | @csv' "$STATE_FILE")
+
+}
+
+main "$@"
diff --git a/capt/scripts/generate_secret.sh b/capt/scripts/generate_secret.sh
new file mode 100755
index 0000000..a83b1da
--- /dev/null
+++ b/capt/scripts/generate_secret.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+# Generate the BMC secret. All machines share the same secret. The only customization is the namespace, user name, and password.
+
+function main() {
+    declare -r STATE_FILE="$1"
+    declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE")
+    export NAMESPACE=$(yq eval '.namespace' "$STATE_FILE")
+    export BMC_USER_BASE64=$(yq eval '.virtualBMC.user' "$STATE_FILE" | tr -d '\n' | base64)
+    export BMC_PASS_BASE64=$(yq eval '.virtualBMC.pass' "$STATE_FILE" | tr -d '\n' | base64)
+
+    # render the Secret from the template; the path assumes the script is run from the capt/ directory
+    envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" <templates/bmc-secret.tmpl >"$OUTPUT_DIR"/bmc-secret.yaml
+    unset BMC_USER_BASE64
+    unset BMC_PASS_BASE64
+    unset NAMESPACE
+}
+
+main "$@"
diff --git a/capt/scripts/generate_state.sh b/capt/scripts/generate_state.sh
new file mode 100755
index 0000000..941cd49
--- /dev/null
+++ b/capt/scripts/generate_state.sh
@@ -0,0 +1,132 @@
+#!/bin/bash
+# This script generates the state data needed for creating the CAPT playground.
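+# Usage (as wired up by the generate-state task in Taskfile.yaml):
+#   ./scripts/generate_state.sh config.yaml .state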
+
+# state file spec (documentation only; the heredoc below is discarded)
+cat <<EOF >/dev/null
+---
+clusterName: "capt-playground"
+outputDir: "/home/tink/repos/tinkerbell/cluster-api-provider-tinkerbell/playground/output"
+namespace: "tink"
+counts:
+  controlPlanes: 1
+  workers: 1
+  spares: 1
+versions:
+  capt: 0.5.3
+  chart: 0.4.4
+  kube: v1.28.8
+  os: 22.04
+os:
+  registry: reg.weinstocklabs.com/tinkerbell/cluster-api-provider-tinkerbell
+  distro: ubuntu
+  sshKey: ""
+  version: "2204"
+vm:
+  baseName: "node"
+  cpusPerVM: 2
+  memInMBPerVM: 2048
+  diskSizeInGBPerVM: 10
+  diskPath: "/tmp"
+  details:
+    node1:
+      mac: 02:7f:92:bd:2d:57
+      bmc:
+        port: 6231
+      role: control-plane
+      ip: 172.18.10.21
+      gateway: 172.18.0.1
+    node2:
+      mac: 02:f3:eb:c1:aa:2b
+      bmc:
+        port: 6232
+      role: worker
+      ip: 172.18.10.22
+      gateway: 172.18.0.1
+    node3:
+      mac: 02:3c:e6:70:1b:5e
+      bmc:
+        port: 6233
+      role: spare
+      ip: 172.18.10.23
+      gateway: 172.18.0.1
+virtualBMC:
+  containerName: "virtualbmc"
+  image: ghcr.io/jacobweinstock/virtualbmc
+  user: "root"
+  pass: "calvin"
+  ip: 172.18.0.3
+totalNodes: 3
+kind:
+  kubeconfig: /home/tink/repos/tinkerbell/cluster-api-provider-tinkerbell/playground/output/kind.kubeconfig
+  gatewayIP: 172.18.0.1
+  nodeIPBase: 172.18.10.20
+  bridgeName: br-d086780dac6b
+tinkerbell:
+  vip: 172.18.10.74
+cluster:
+  controlPlane:
+    vip: 172.18.10.75
+  podCIDR: 172.100.0.0/16
+EOF
+
+set -euo pipefail
+
+function generate_mac() {
+    declare NODE_NAME="$1"
+
+    echo "$NODE_NAME" | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/02:\1:\2:\3:\4:\5/'
+}
+
+function main() {
+    # read in the config.yaml file and populate the .state file
+    declare CONFIG_FILE="$1"
+    declare STATE_FILE="$2"
+
+    # update outputDir to be a fully qualified path
+    output_dir=$(yq eval '.outputDir' "$CONFIG_FILE")
+    if [[ $output_dir != /* ]]; then
+        current_dir=$(pwd)
+        output_dir="$current_dir/$output_dir"
+    fi
+    config_file=$(realpath "$CONFIG_FILE")
+    state_file="$STATE_FILE"
+
+    cp -a "$config_file" "$state_file"
+    yq e -i '.outputDir = "'$output_dir'"' "$state_file"
+
+    # totalNodes
+    total_nodes=$(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file") + $(yq eval '.counts.spares' "$state_file")))
+    yq e -i ".totalNodes = $total_nodes" "$state_file"
+
+    # populate vm details (name, MAC, BMC port, role)
+    base_name=$(yq eval '.vm.baseName' "$state_file")
+    base_ipmi_port=6230
+    for i in $(seq 1 $total_nodes); do
+        name="$base_name$i"
+        mac=$(generate_mac "$name")
+        yq e -i ".vm.details.$name.mac = \"$mac\"" "$state_file"
+        yq e -i ".vm.details.$name.bmc.port = $((base_ipmi_port + i))" "$state_file"
+        # set the node role
+        if [[ $i -le $(yq eval '.counts.controlPlanes' "$state_file") ]]; then
+            yq e -i ".vm.details.$name.role = \"control-plane\"" "$state_file"
+        elif [[ $i -le $(($(yq eval '.counts.controlPlanes' "$state_file") + $(yq eval '.counts.workers' "$state_file"))) ]]; then
+            yq e -i ".vm.details.$name.role = \"worker\"" "$state_file"
+        else
+            yq e -i ".vm.details.$name.role = \"spare\"" "$state_file"
+        fi
+        unset name
+        unset mac
+    done
+
+    # populate kind.kubeconfig
+    yq e -i '.kind.kubeconfig = "'$output_dir'/kind.kubeconfig"' "$state_file"
+
+    # populate the expected OS version in the raw image name (22.04 -> 2204)
+    os_version=$(yq eval '.versions.os' "$state_file")
+    os_version=$(echo "$os_version" | tr -d '.')
+    yq e -i '.os.version = "'$os_version'"' "$state_file"
+}
+
+main "$@"
diff --git a/capt/scripts/update_state.sh b/capt/scripts/update_state.sh
new file mode 100755
index 0000000..f27a647
--- /dev/null
+++ b/capt/scripts/update_state.sh
@@ -0,0 +1,48 @@
+#!/bin/bash
+
+set -euo pipefail
+
+# This script updates the state file with networking data (node IPs, VIPs, pod CIDR, and bridge name) derived from the running KinD cluster.
+
+function main() {
+    declare -r STATE_FILE="$1"
+    declare CLUSTER_NAME=$(yq eval '.clusterName' "$STATE_FILE")
+    declare GATEWAY_IP=$(docker inspect -f '{{ .NetworkSettings.Networks.kind.Gateway }}' "$CLUSTER_NAME"-control-plane)
+    declare NODE_IP_BASE=$(awk -F"." '{print $1"."$2".10.20"}' <<<"$GATEWAY_IP")
+    declare NODE_BASE=$(yq eval '.vm.baseName' "$STATE_FILE")
+    declare IP_LAST_OCTET=$(echo "$NODE_IP_BASE" | cut -d. -f4)
+
+    yq e -i '.kind.gatewayIP = "'$GATEWAY_IP'"' "$STATE_FILE"
+    yq e -i '.kind.nodeIPBase = "'$NODE_IP_BASE'"' "$STATE_FILE"
+
+    # set an ip and gateway per node
+    idx=1
+    while IFS=$',' read -r name; do
+        v=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx))
+        ((idx++))
+        yq e -i ".vm.details.$name.ip = \"$v\"" "$STATE_FILE"
+        yq e -i ".vm.details.$name.gateway = \"$GATEWAY_IP\"" "$STATE_FILE"
+        unset v
+    done < <(yq e '.vm.details.[] | [key] | @csv' "$STATE_FILE")
+
+    # set the Tinkerbell Load Balancer IP (VIP)
+    offset=50
+    t_lb=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset))
+    yq e -i '.tinkerbell.vip = "'$t_lb'"' "$STATE_FILE"
+
+    # set the cluster control plane load balancer IP (VIP)
+    cp_lb=$(echo "$NODE_IP_BASE" | awk -F"." '{print $1"."$2"."$3}').$((IP_LAST_OCTET + idx + offset + 1))
+    yq e -i '.cluster.controlPlane.vip = "'$cp_lb'"' "$STATE_FILE"
+
+    # set the cluster pod cidr
+    POD_CIDR=$(awk -F"." '{print $1".100.0.0/16"}' <<<"$GATEWAY_IP")
+    yq e -i '.cluster.podCIDR = "'$POD_CIDR'"' "$STATE_FILE"
+
+    # set the KinD bridge name
+    network_id=$(docker network inspect -f '{{.Id}}' kind)
+    bridge_name="br-${network_id:0:12}"
+    yq e -i '.kind.bridgeName = "'$bridge_name'"' "$STATE_FILE"
+
+}
+
+main "$@"
diff --git a/capt/scripts/virtualbmc.sh b/capt/scripts/virtualbmc.sh
new file mode 100755
index 0000000..d36b7be
--- /dev/null
+++ b/capt/scripts/virtualbmc.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+set -euo pipefail
+
+# This script will register and start virtual BMC entries in a running virtualbmc container.
+
+function main() {
+    declare -r STATE_FILE="$1"
+    declare -r OUTPUT_DIR=$(yq eval '.outputDir' "$STATE_FILE")
+
+    username=$(yq eval '.virtualBMC.user' "$STATE_FILE")
+    password=$(yq eval '.virtualBMC.pass' "$STATE_FILE")
+
+    container_name=$(yq eval '.virtualBMC.containerName' "$STATE_FILE")
+    while IFS=$',' read -r name port; do
+        docker exec "$container_name" vbmc add --username "$username" --password "$password" --port "$port" "$name"
+        docker exec "$container_name" vbmc start "$name"
+    done < <(yq e '.vm.details.[] | [key, .bmc.port] | @csv' "$STATE_FILE")
+
+}
+
+main "$@"
diff --git a/capt/tasks/Taskfile-capi.yaml b/capt/tasks/Taskfile-capi.yaml
new file mode 100644
index 0000000..0a82ab8
--- /dev/null
+++ b/capt/tasks/Taskfile-capi.yaml
@@ -0,0 +1,152 @@
+version: "3"
+
+tasks:
+  ordered:
+    summary: |
+      CAPI tasks run in order of dependency.
+    cmds:
+      - task: create-cluster-yaml
+      - task: init
+      - task: generate-cluster-yaml
+      - task: create-kustomize-file
+      - task: apply-kustomization
+
+  create-cluster-yaml:
+    run: once
+    summary: |
+      Create the cluster yaml.
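+    # Renders templates/clusterctl.tmpl into the output directory's clusterctl.yaml, pinning the Tinkerbell infrastructure provider to the CAPT version recorded in the state file.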
+ env: + CAPT_VERSION: + sh: yq eval '.versions.capt' {{.STATE_FILE_FQ_PATH}} + vars: + OUTPUT_DIR: + sh: echo $(yq eval '.outputDir' config.yaml) + cmds: + - envsubst '$CAPT_VERSION' < templates/clusterctl.tmpl > {{.OUTPUT_DIR}}/clusterctl.yaml + status: + - grep -q "$CAPT_VERSION" {{.OUTPUT_DIR}}/clusterctl.yaml + + init: + run: once + deps: [create-cluster-yaml] + summary: | + Initialize the cluster. + env: + TINKERBELL_IP: + sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}} + CLUSTERCTL_DISABLE_VERSIONCHECK: true + XDG_CONFIG_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CONFIG_DIRS: "{{.OUTPUT_DIR}}/xdg" + XDG_STATE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CACHE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_RUNTIME_DIR: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_DIRS: "{{.OUTPUT_DIR}}/xdg" + vars: + OUTPUT_DIR: + sh: echo $(yq eval '.outputDir' config.yaml) + KIND_GATEWAY_IP: + sh: yq eval '.kind.gatewayIP' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" clusterctl --config {{.OUTPUT_DIR}}/clusterctl.yaml init --infrastructure tinkerbell + status: + - expected=1; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get pods -n capt-system |grep -ce "capt-controller"); [[ "$got" == "$expected" ]] + + generate-cluster-yaml: + run: once + deps: [init] + summary: | + Generate the cluster yaml. + env: + CONTROL_PLANE_VIP: + sh: yq eval '.cluster.controlPlane.vip' {{.STATE_FILE_FQ_PATH}} + POD_CIDR: + sh: yq eval '.cluster.podCIDR' {{.STATE_FILE_FQ_PATH}} + CLUSTERCTL_DISABLE_VERSIONCHECK: true + XDG_CONFIG_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CONFIG_DIRS: "{{.OUTPUT_DIR}}/xdg" + XDG_STATE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_CACHE_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_RUNTIME_DIR: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_HOME: "{{.OUTPUT_DIR}}/xdg" + XDG_DATA_DIRS: "{{.OUTPUT_DIR}}/xdg" + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + OUTPUT_DIR: + sh: yq eval '.outputDir' config.yaml + KUBE_VERSION: + sh: yq eval '.versions.kube' {{.STATE_FILE_FQ_PATH}} + CP_COUNT: + sh: yq eval '.counts.controlPlanes' {{.STATE_FILE_FQ_PATH}} + WORKER_COUNT: + sh: yq eval '.counts.workers' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" clusterctl generate cluster {{.CLUSTER_NAME}} --config {{.OUTPUT_DIR}}/clusterctl.yaml --kubernetes-version "{{.KUBE_VERSION}}" --control-plane-machine-count="{{.CP_COUNT}}" --worker-machine-count="{{.WORKER_COUNT}}" --target-namespace={{.NAMESPACE}} --write-to {{.OUTPUT_DIR}}/prekustomization.yaml + status: + - grep -q "{{.KUBE_VERSION}}" {{.OUTPUT_DIR}}/prekustomization.yaml + + create-kustomize-file: + run: once + summary: | + Kustomize file for the CAPI generated config file (prekustomization.yaml). 
+ env: + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + OS_REGISTRY: + sh: yq eval '.os.registry' {{.STATE_FILE_FQ_PATH}} + OS_DISTRO: + sh: yq eval '.os.distro' {{.STATE_FILE_FQ_PATH}} + OS_VERSION: + sh: yq eval '.os.version' {{.STATE_FILE_FQ_PATH}} + VERSIONS_OS: + sh: yq eval '.versions.os' {{.STATE_FILE_FQ_PATH}} + SSH_AUTH_KEY: + sh: yq eval '.os.sshKey' {{.STATE_FILE_FQ_PATH}} + KUBE_VERSION: + sh: yq eval '.versions.kube' {{.STATE_FILE_FQ_PATH}} + TINKERBELL_VIP: + sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}} + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KUBEVIP_VERSION: + sh: yq eval '.versions.kubevip' {{.STATE_FILE_FQ_PATH}} + CONTROL_PLANE_VIP: + sh: yq eval '.cluster.controlPlane.vip' {{.STATE_FILE_FQ_PATH}} + CONF_PATH: # https://github.com/kube-vip/kube-vip/issues/684 + sh: "[[ $(echo {{.KUBE_VERSION}} | awk -F. '{print $2}') -gt 28 ]] && echo /etc/kubernetes/super-admin.conf || echo /etc/kubernetes/admin.conf" + vars: + KUBE_VERSION: + sh: yq eval '.versions.kube' {{.STATE_FILE_FQ_PATH}} + OUTPUT_DIR: + sh: yq eval '.outputDir' config.yaml + sources: + - config.yaml + generates: + - "{{.OUTPUT_DIR}}/kustomization.yaml" + cmds: + - envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/kustomization.tmpl > {{.OUTPUT_DIR}}/kustomization.yaml + + apply-kustomization: + run: once + deps: [generate-cluster-yaml, create-kustomize-file] + summary: | + Kustomize the cluster yaml. + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + sources: + - "{{.OUTPUT_DIR}}/kustomization.yaml" + - "{{.OUTPUT_DIR}}/prekustomization.yaml" + generates: + - "{{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml" + cmds: + - KUBECONFIG="{{.KUBECONFIG}}" kubectl kustomize {{.OUTPUT_DIR}} -o {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml diff --git a/capt/tasks/Taskfile-create.yaml b/capt/tasks/Taskfile-create.yaml new file mode 100644 index 0000000..f655b67 --- /dev/null +++ b/capt/tasks/Taskfile-create.yaml @@ -0,0 +1,227 @@ +version: "3" + +includes: + vbmc: ./Taskfile-vbmc.yaml + capi: ./Taskfile-capi.yaml + +tasks: + playground-ordered: + silent: true + summary: | + Create the CAPT playground. + cmds: + - task: kind-cluster + - task: update-state + - task: deploy-tinkerbell-helm-chart + - task: vbmc:start-server + - task: vbmc:update-state + - task: hardware-cr + - task: bmc-machine-cr + - task: bmc-secret + - task: vms + - task: vbmc:start-vbmcs + - task: apply-bmc-secret + - task: apply-bmc-machines + - task: apply-hardware + - task: capi:ordered + - task: allow-customization + - task: create-workload-cluster + - task: get-workload-cluster-kubeconfig + + allow-customization: + prompt: The Workload cluster is ready to be provisioned. Execution is paused to allow for any User customizations. Press `y` to continue to Workload cluster creation. Press `n` to exit the whole process. + cmds: + - echo 'Creating Workload cluster' + + kind-cluster: + run: once + summary: | + Install a KinD cluster. 
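+    # This is the management cluster for the playground; its kubeconfig is written to the path stored under .kind.kubeconfig in the state file.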
+    vars:
+      CLUSTER_NAME:
+        sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}}
+      KUBECONFIG:
+        sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}}
+    cmds:
+      - kind create cluster --name {{.CLUSTER_NAME}} --kubeconfig "{{.KUBECONFIG}}"
+      - until KUBECONFIG="{{.KUBECONFIG}}" kubectl wait --for=condition=ready node --all --timeout=5m; do echo "Waiting for nodes to be ready..."; sleep 1; done
+    status:
+      - KUBECONFIG="{{.KUBECONFIG}}" kind get clusters | grep -q {{.CLUSTER_NAME}}
+
+  update-state:
+    silent: true
+    run: once
+    deps: [kind-cluster]
+    summary: |
+      Update the state file with the KinD cluster information. Should be run only after the KinD cluster is created.
+    cmds:
+      - ./scripts/update_state.sh "{{.STATE_FILE_FQ_PATH}}"
+
+  hardware-cr:
+    run: once
+    deps: [update-state]
+    summary: |
+      Create Hardware objects.
+    sources:
+      - "{{.STATE_FILE_FQ_PATH}}"
+    generates:
+      - "{{.OUTPUT_DIR}}/hardware-*.yaml"
+    cmds:
+      - ./scripts/generate_hardware.sh {{.STATE_FILE_FQ_PATH}}
+
+  bmc-machine-cr:
+    run: once
+    deps: [vbmc:update-state]
+    summary: |
+      Create BMC Machine objects.
+    sources:
+      - "{{.STATE_FILE_FQ_PATH}}"
+    generates:
+      - "{{.OUTPUT_DIR}}/bmc-machine-*.yaml"
+    cmds:
+      - ./scripts/generate_bmc.sh {{.STATE_FILE_FQ_PATH}}
+
+  bmc-secret:
+    run: once
+    deps: [update-state]
+    summary: |
+      Create the BMC secret.
+    sources:
+      - "{{.STATE_FILE_FQ_PATH}}"
+    generates:
+      - "{{.OUTPUT_DIR}}/bmc-secret.yaml"
+    cmds:
+      - ./scripts/generate_secret.sh {{.STATE_FILE_FQ_PATH}}
+
+  deploy-tinkerbell-helm-chart:
+    run: once
+    deps: [kind-cluster, update-state]
+    summary: |
+      Deploy the Tinkerbell Helm chart.
+    vars:
+      KUBECONFIG:
+        sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}}
+      LB_IP:
+        sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}}
+      TRUSTED_PROXIES:
+        sh: KUBECONFIG={{.KUBECONFIG}} kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
+      STACK_CHART_VERSION:
+        sh: yq eval '.versions.chart' {{.STATE_FILE_FQ_PATH}}
+      NAMESPACE:
+        sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}}
+      CHART_NAME: tink-stack
+    cmds:
+      - KUBECONFIG="{{.KUBECONFIG}}" helm install {{.CHART_NAME}} oci://ghcr.io/tinkerbell/charts/stack --version "{{.STACK_CHART_VERSION}}" --create-namespace --namespace {{.NAMESPACE}} --wait --set "smee.trustedProxies={{.TRUSTED_PROXIES}}" --set "hegel.trustedProxies={{.TRUSTED_PROXIES}}" --set "stack.loadBalancerIP={{.LB_IP}}" --set "smee.publicIP={{.LB_IP}}"
+    status:
+      - KUBECONFIG="{{.KUBECONFIG}}" helm list -n {{.NAMESPACE}} | grep -q {{.CHART_NAME}}
+
+  vms:
+    run: once
+    deps: [update-state, vbmc:update-state]
+    summary: |
+      Create Libvirt VMs.
+    vars:
+      TOTAL_HARDWARE:
+        sh: yq eval '.totalNodes' {{.STATE_FILE_FQ_PATH}}
+      VM_BASE_NAME:
+        sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}}
+    cmds:
+      - ./scripts/create_vms.sh "{{.STATE_FILE_FQ_PATH}}"
+    status:
+      - expected={{.TOTAL_HARDWARE}}; got=$(virsh --connect qemu:///system list --all --name | grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]]
+
+  apply-bmc-secret:
+    run: once
+    deps: [kind-cluster, bmc-secret]
+    summary: |
+      Apply the BMC secret.
+    vars:
+      NAMESPACE:
+        sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}}
+      KUBECONFIG:
+        sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}}
+    cmds:
+      - KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/bmc-secret.yaml
+    status:
+      - KUBECONFIG="{{.KUBECONFIG}}" kubectl get secret bmc-creds -n {{.NAMESPACE}}
+
+  apply-bmc-machines:
+    run: once
+    deps: [kind-cluster, bmc-machine-cr]
+    summary: |
+      Apply the BMC machines.
+ vars: + NAMES: + sh: yq e '.vm.details[] | [key] | @csv' {{.STATE_FILE_FQ_PATH}} + TOTAL_HARDWARE: + sh: yq eval '.totalNodes' {{.STATE_FILE_FQ_PATH}} + VM_BASE_NAME: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - for: { var: NAMES } + cmd: KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/bmc-machine-{{.ITEM}}.yaml + status: + - expected={{.TOTAL_HARDWARE}}; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get machines.bmc -n {{.NAMESPACE}} | grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]] + + apply-hardware: + run: once + deps: [kind-cluster, hardware-cr] + summary: | + Apply the hardware. + vars: + NAMES: + sh: yq e '.vm.details[] | [key] | @csv' {{.STATE_FILE_FQ_PATH}} + TOTAL_HARDWARE: + sh: yq eval '.totalNodes' {{.STATE_FILE_FQ_PATH}} + VM_BASE_NAME: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + cmds: + - for: { var: NAMES } + cmd: KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/hardware-{{.ITEM}}.yaml + status: + - expected={{.TOTAL_HARDWARE}}; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get hardware -n {{.NAMESPACE}} | grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]] + + create-workload-cluster: + run: once + deps: [kind-cluster, capi:ordered] + summary: | + Create the workload cluster by applying the generated manifest file. + vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + cmds: + - until KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml >>{{.OUTPUT_DIR}}/error.log 2>&1; do echo "Trying kubectl apply again..."; sleep 3; done + - echo "Workload manifest applied to cluster." + status: + - KUBECONFIG="{{.KUBECONFIG}}" kubectl get -n {{.NAMESPACE}} cluster {{.CLUSTER_NAME}} + + get-workload-cluster-kubeconfig: + run: once + deps: [create-workload-cluster] + summary: | + Get the workload cluster's kubeconfig. + vars: + KUBECONFIG: + sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}} + NAMESPACE: + sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}} + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + cmds: + - until KUBECONFIG="{{.KUBECONFIG}}" clusterctl get kubeconfig -n {{.NAMESPACE}} {{.CLUSTER_NAME}} >>{{.OUTPUT_DIR}}/error.log 2>&1 ; do echo "Waiting for workload cluster kubeconfig to be available..."; sleep 4; done + - KUBECONFIG="{{.KUBECONFIG}}" clusterctl get kubeconfig -n {{.NAMESPACE}} {{.CLUSTER_NAME}} > {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig + - echo "Workload cluster kubeconfig saved to {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig." + status: + - echo ; [ -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig ] diff --git a/capt/tasks/Taskfile-delete.yaml b/capt/tasks/Taskfile-delete.yaml new file mode 100644 index 0000000..4af7fb5 --- /dev/null +++ b/capt/tasks/Taskfile-delete.yaml @@ -0,0 +1,57 @@ +version: "3" + +tasks: + playground: + summary: | + Delete the CAPT playground. + cmds: + - task: kind-cluster + - task: vbmc-container + - task: vms + - task: output-dir + + kind-cluster: + summary: | + Delete the KinD cluster. 
+ vars: + CLUSTER_NAME: + sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}} + cmds: + - kind delete cluster --name {{.CLUSTER_NAME}} + status: + - got=$(kind get clusters | grep -c {{.CLUSTER_NAME}} || :); [[ "$got" == "0" ]] + + vms: + summary: | + Delete the VMs. + vars: + VM_NAMES: + sh: yq e '.vm.details[] | [key] | @csv' {{.STATE_FILE_FQ_PATH}} + VM_BASE_NAME: + sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}} + cmds: + - for: { var: VM_NAMES } + cmd: (virsh --connect qemu:///system destroy {{.ITEM}} || true) ## if the VM is already off, this will fail + - for: { var: VM_NAMES } + cmd: virsh --connect qemu:///system undefine --nvram --remove-all-storage {{.ITEM}} + status: + - got=$(virsh --connect qemu:///system list --all --name | grep -ce "{{.VM_BASE_NAME}}*" || :); [[ "$got" == "0" ]] + + vbmc-container: + summary: | + Delete the Virtual BMC container. + vars: + VBMC_CONTAINER_NAME: + sh: yq eval '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + cmds: + - docker rm -f {{.VBMC_CONTAINER_NAME}} + status: + - got=$(docker ps -a | grep -c {{.VBMC_CONTAINER_NAME}} || :); [[ "$got" == "0" ]] + + output-dir: + summary: | + Delete the output directory. + cmds: + - rm -rf {{.OUTPUT_DIR}} + status: + - echo ;[ ! -d {{.OUTPUT_DIR}} ] diff --git a/capt/tasks/Taskfile-vbmc.yaml b/capt/tasks/Taskfile-vbmc.yaml new file mode 100644 index 0000000..6c92693 --- /dev/null +++ b/capt/tasks/Taskfile-vbmc.yaml @@ -0,0 +1,42 @@ +version: "3" + +tasks: + start-server: + run: once + summary: | + Start the virtualbmc server. Requires the "kind" docker network to exist. + vars: + VBMC_CONTAINER_NAME: + sh: yq eval '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + VBMC_CONTAINER_IMAGE: + sh: yq eval '.virtualBMC.image' {{.STATE_FILE_FQ_PATH}} + cmds: + - docker run -d --privileged --rm --network kind -v /var/run/libvirt/libvirt-sock-ro:/var/run/libvirt/libvirt-sock-ro -v /var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock --name {{.VBMC_CONTAINER_NAME}} {{.VBMC_CONTAINER_IMAGE}} + status: + - docker ps | grep -q {{.VBMC_CONTAINER_NAME}} + + start-vbmcs: + run: once + deps: [start-server] + summary: | + Register and start the virtualbmc servers. Requires that the virtual machines exist. + vars: + VBMC_NAME: + sh: yq e '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + cmds: + - ./scripts/virtualbmc.sh {{.STATE_FILE_FQ_PATH}} + status: + - expected=$(yq e '.totalNodes' {{.STATE_FILE_FQ_PATH}}); got=$(docker exec {{.VBMC_NAME}} vbmc list | grep -c "running" || :); [[ "$got" == "$expected" ]] + + update-state: + run: once + deps: [start-server] + summary: | + Update the state file with the virtual bmc server information. 
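+    # Records the virtualbmc container's IP address on the "kind" Docker network under .virtualBMC.ip in the state file.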
+ vars: + VBMC_CONTAINER_NAME: + sh: yq eval '.virtualBMC.containerName' {{.STATE_FILE_FQ_PATH}} + cmds: + - vbmc_ip=$(docker inspect -f '{{`{{ .NetworkSettings.Networks.kind.IPAddress }}`}}' {{.VBMC_CONTAINER_NAME}}); yq e -i '.virtualBMC.ip = "'$vbmc_ip'"' {{.STATE_FILE_FQ_PATH}} + status: + - vbmc_ip=$(docker inspect -f '{{`{{ .NetworkSettings.Networks.kind.IPAddress }}`}}' {{.VBMC_CONTAINER_NAME}}); [[ "$(yq eval '.virtualBMC.ip' {{.STATE_FILE_FQ_PATH}})" == "$vbmc_ip" ]] diff --git a/capt/templates/bmc-machine.tmpl b/capt/templates/bmc-machine.tmpl new file mode 100644 index 0000000..11d8ee2 --- /dev/null +++ b/capt/templates/bmc-machine.tmpl @@ -0,0 +1,16 @@ +apiVersion: bmc.tinkerbell.org/v1alpha1 +kind: Machine +metadata: + name: $NODE_NAME + namespace: $NAMESPACE +spec: + connection: + authSecretRef: + name: bmc-creds + namespace: $NAMESPACE + host: $BMC_IP + insecureTLS: true + port: $BMC_PORT + providerOptions: + ipmitool: + port: $BMC_PORT \ No newline at end of file diff --git a/capt/templates/bmc-secret.tmpl b/capt/templates/bmc-secret.tmpl new file mode 100644 index 0000000..35fa3e9 --- /dev/null +++ b/capt/templates/bmc-secret.tmpl @@ -0,0 +1,9 @@ +apiVersion: v1 +data: + password: $BMC_PASS_BASE64 + username: $BMC_USER_BASE64 +kind: Secret +metadata: + name: bmc-creds + namespace: $NAMESPACE +type: kubernetes.io/basic-auth \ No newline at end of file diff --git a/capt/templates/clusterctl.tmpl b/capt/templates/clusterctl.tmpl new file mode 100644 index 0000000..606bbf6 --- /dev/null +++ b/capt/templates/clusterctl.tmpl @@ -0,0 +1,7 @@ +providers: + - name: "tinkerbell" + url: "https://github.com/tinkerbell/cluster-api-provider-tinkerbell/releases/v$CAPT_VERSION/infrastructure-components.yaml" + type: "InfrastructureProvider" +images: + infrastructure-tinkerbell: + tag: v$CAPT_VERSION \ No newline at end of file diff --git a/capt/templates/hardware.tmpl b/capt/templates/hardware.tmpl new file mode 100644 index 0000000..bdfdd84 --- /dev/null +++ b/capt/templates/hardware.tmpl @@ -0,0 +1,34 @@ +apiVersion: tinkerbell.org/v1alpha1 +kind: Hardware +metadata: + labels: + tinkerbell.org/role: $NODE_ROLE + name: $NODE_NAME + namespace: $NAMESPACE +spec: + bmcRef: + apiGroup: bmc.tinkerbell.org + kind: Machine + name: $NODE_NAME + disks: + - device: /dev/vda + interfaces: + - dhcp: + arch: x86_64 + hostname: $NODE_NAME + ip: + address: $NODE_IP + gateway: $GATEWAY_IP + netmask: 255.255.0.0 + lease_time: 4294967294 + mac: $NODE_MAC + name_servers: + - 8.8.8.8 + - 1.1.1.1 + netboot: + allowPXE: true + allowWorkflow: true + metadata: + instance: + hostname: $NODE_NAME + id: $NODE_MAC \ No newline at end of file diff --git a/capt/templates/kustomization.tmpl b/capt/templates/kustomization.tmpl new file mode 100644 index 0000000..0931107 --- /dev/null +++ b/capt/templates/kustomization.tmpl @@ -0,0 +1,218 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +namespace: $NAMESPACE +resources: + - prekustomization.yaml +patches: + - target: + group: infrastructure.cluster.x-k8s.io + kind: TinkerbellMachineTemplate + name: ".*control-plane.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/template/spec + value: + hardwareAffinity: + required: + - labelSelector: + matchLabels: + tinkerbell.org/role: control-plane + templateOverride: | + version: "0.1" + name: playground-template + global_timeout: 6000 + tasks: + - name: "playground-template" + worker: "{{.device_1}}" + volumes: + - /dev:/dev + - /dev/console:/dev/console + - /lib/firmware:/lib/firmware:ro + 
actions: + - name: "stream-image" + image: quay.io/tinkerbell-actions/oci2disk:v1.0.0 + timeout: 600 + environment: + IMG_URL: $OS_REGISTRY/$OS_DISTRO-$OS_VERSION:$KUBE_VERSION.gz + DEST_DISK: {{ index .Hardware.Disks 0 }} + COMPRESSED: true + - name: "add-tink-cloud-init-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: + Ec2: + metadata_urls: ["http://$TINKERBELL_VIP:50061"] + strict_id: false + system_info: + default_user: + name: tink + groups: [wheel, adm] + sudo: ["ALL=(ALL) NOPASSWD:ALL"] + shell: /bin/bash + manage_etc_hosts: localhost + warnings: + dsid_missing_source: off + - name: "add-tink-cloud-init-ds-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/ds-identify.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: Ec2 + - name: "kexec-image" + image: ghcr.io/jacobweinstock/waitdaemon:0.2.0 + timeout: 90 + pid: host + environment: + BLOCK_DEVICE: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + IMAGE: quay.io/tinkerbell-actions/kexec:v1.0.0 + WAIT_SECONDS: 10 + volumes: + - /var/run/docker.sock:/var/run/docker.sock + - target: + group: infrastructure.cluster.x-k8s.io + kind: TinkerbellMachineTemplate + name: ".*worker.*" + version: v1beta1 + patch: |- + - op: add + path: /spec/template/spec + value: + hardwareAffinity: + required: + - labelSelector: + matchLabels: + tinkerbell.org/role: worker + templateOverride: | + version: "0.1" + name: playground-template + global_timeout: 6000 + tasks: + - name: "playground-template" + worker: "{{.device_1}}" + volumes: + - /dev:/dev + - /dev/console:/dev/console + - /lib/firmware:/lib/firmware:ro + actions: + - name: "stream-image" + image: quay.io/tinkerbell-actions/oci2disk:v1.0.0 + timeout: 600 + environment: + IMG_URL: $OS_REGISTRY/$OS_DISTRO-$OS_VERSION:$KUBE_VERSION.gz + DEST_DISK: {{ index .Hardware.Disks 0 }} + COMPRESSED: true + - name: "add-tink-cloud-init-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: + Ec2: + metadata_urls: ["http://$TINKERBELL_VIP:50061"] + strict_id: false + system_info: + default_user: + name: tink + groups: [wheel, adm] + sudo: ["ALL=(ALL) NOPASSWD:ALL"] + shell: /bin/bash + manage_etc_hosts: localhost + warnings: + dsid_missing_source: off + - name: "add-tink-cloud-init-ds-config" + image: quay.io/tinkerbell-actions/writefile:v1.0.0 + timeout: 90 + environment: + DEST_DISK: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + DEST_PATH: /etc/cloud/ds-identify.cfg + UID: 0 + GID: 0 + MODE: 0600 + DIRMODE: 0700 + CONTENTS: | + datasource: Ec2 + - name: "kexec-image" + image: ghcr.io/jacobweinstock/waitdaemon:0.2.0 + timeout: 90 + pid: host + environment: + BLOCK_DEVICE: {{ formatPartition ( index .Hardware.Disks 0 ) 1 }} + FS_TYPE: ext4 + IMAGE: quay.io/tinkerbell-actions/kexec:v1.0.0 + WAIT_SECONDS: 10 + volumes: + - /var/run/docker.sock:/var/run/docker.sock + - target: + group: 
infrastructure.cluster.x-k8s.io
+      kind: TinkerbellCluster
+      name: ".*"
+      version: v1beta1
+    patch: |-
+      - op: add
+        path: /spec
+        value:
+          imageLookupBaseRegistry: "$OS_REGISTRY"
+          imageLookupOSDistro: "$OS_DISTRO"
+          imageLookupOSVersion: "$VERSIONS_OS"
+  - target:
+      group: bootstrap.cluster.x-k8s.io
+      kind: KubeadmConfigTemplate
+      name: "$CLUSTER_NAME-.*"
+      version: v1beta1
+    patch: |-
+      - op: add
+        path: /spec/template/spec/users
+        value:
+          - name: tink
+            sudo: ALL=(ALL) NOPASSWD:ALL
+            sshAuthorizedKeys:
+              - $SSH_AUTH_KEY
+  - target:
+      group: controlplane.cluster.x-k8s.io
+      kind: KubeadmControlPlane
+      name: "$CLUSTER_NAME-.*"
+      version: v1beta1
+    patch: |-
+      - op: add
+        path: /spec/kubeadmConfigSpec/users
+        value:
+          - name: tink
+            sudo: ALL=(ALL) NOPASSWD:ALL
+            sshAuthorizedKeys:
+              - $SSH_AUTH_KEY
+  - target:
+      group: controlplane.cluster.x-k8s.io
+      kind: KubeadmControlPlane
+      name: "$CLUSTER_NAME-.*"
+      version: v1beta1
+    patch: |-
+      - op: add
+        path: /spec/kubeadmConfigSpec/preKubeadmCommands
+        value:
+          - mkdir -p /etc/kubernetes/manifests && ctr images pull ghcr.io/kube-vip/kube-vip:v$KUBEVIP_VERSION && ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v$KUBEVIP_VERSION vip /kube-vip manifest pod --arp --interface $(ip -4 -j route list default | jq -r .[0].dev) --address $CONTROL_PLANE_VIP --controlplane --leaderElection --k8sConfigPath $CONF_PATH > /etc/kubernetes/manifests/kube-vip.yaml
diff --git a/stack/README.md b/stack/README.md
new file mode 100644
index 0000000..870b694
--- /dev/null
+++ b/stack/README.md
@@ -0,0 +1,40 @@
+# Tinkerbell Stack Playground
+
+This section contains the Tinkerbell stack playground instructions. It is not a production reference architecture.
+Please use the [Helm chart](https://github.com/tinkerbell/charts) for production deployments.
+
+## Quick-Starts
+
+The following quick-start guides will walk you through standing up the Tinkerbell stack.
+There are a few options for this.
+Pick the one that works best for you.
+
+## Options
+
+- [Vagrant and VirtualBox](docs/quickstarts/VAGRANTVBOX.md)
+- [Vagrant and Libvirt](docs/quickstarts/VAGRANTLVIRT.md)
+- [Kubernetes](docs/quickstarts/KUBERNETES.md)
+
+## Next Steps
+
+By default, the Vagrant quickstart guides automatically install Ubuntu on the VM (machine1). You can provide your own OS template. To do this:
+
+1. Login to the stack VM
+
+   ```bash
+   vagrant ssh stack
+   ```
+
+1. Add your template. An example Template object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/template.yaml) and more Template documentation can be found [here](https://tinkerbell.org/docs/concepts/templates/).
+
+   ```bash
+   kubectl apply -f my-OS-template.yaml
+   ```
+
+1. Create the workflow. An example Workflow object can be found [here](https://github.com/tinkerbell/tink/tree/main/config/crd/examples/workflow.yaml).
+
+   ```bash
+   kubectl apply -f my-custom-workflow.yaml
+   ```
+
+1.
Restart the machine to provision (if using the vagrant playground test machine this is done by running `vagrant destroy -f machine1 && vagrant up machine1`) diff --git a/docs/quickstarts/KUBERNETES.md b/stack/docs/quickstarts/KUBERNETES.md similarity index 100% rename from docs/quickstarts/KUBERNETES.md rename to stack/docs/quickstarts/KUBERNETES.md diff --git a/docs/quickstarts/VAGRANTLVIRT.md b/stack/docs/quickstarts/VAGRANTLVIRT.md similarity index 99% rename from docs/quickstarts/VAGRANTLVIRT.md rename to stack/docs/quickstarts/VAGRANTLVIRT.md index eb78a43..54e710d 100644 --- a/docs/quickstarts/VAGRANTLVIRT.md +++ b/stack/docs/quickstarts/VAGRANTLVIRT.md @@ -22,7 +22,7 @@ This option will also create a VM and provision an OS onto it. 1. Start the stack ```bash - cd vagrant + cd stack/vagrant vagrant up # This process will take about 5-10 minutes depending on your internet connection. # Hook is about 400MB in size and the Ubuntu jammy image is about 500MB diff --git a/docs/quickstarts/VAGRANTVBOX.md b/stack/docs/quickstarts/VAGRANTVBOX.md similarity index 99% rename from docs/quickstarts/VAGRANTVBOX.md rename to stack/docs/quickstarts/VAGRANTVBOX.md index b9b311a..14c5484 100644 --- a/docs/quickstarts/VAGRANTVBOX.md +++ b/stack/docs/quickstarts/VAGRANTVBOX.md @@ -21,7 +21,7 @@ This option will also create a VM and provision an OS onto it. 1. Start the stack ```bash - cd vagrant + cd stack/vagrant vagrant up # This process will take up to 10 minutes depending on your internet connection. # It will download HookOS, which is a couple hundred megabytes in size, and an Ubuntu cloud image, which is about 600MB. diff --git a/vagrant/.env b/stack/vagrant/.env similarity index 100% rename from vagrant/.env rename to stack/vagrant/.env diff --git a/vagrant/Vagrantfile b/stack/vagrant/Vagrantfile similarity index 100% rename from vagrant/Vagrantfile rename to stack/vagrant/Vagrantfile diff --git a/vagrant/hardware.yaml b/stack/vagrant/hardware.yaml similarity index 100% rename from vagrant/hardware.yaml rename to stack/vagrant/hardware.yaml diff --git a/vagrant/setup.sh b/stack/vagrant/setup.sh similarity index 100% rename from vagrant/setup.sh rename to stack/vagrant/setup.sh diff --git a/vagrant/template.yaml b/stack/vagrant/template.yaml similarity index 100% rename from vagrant/template.yaml rename to stack/vagrant/template.yaml diff --git a/vagrant/ubuntu-download.yaml b/stack/vagrant/ubuntu-download.yaml similarity index 100% rename from vagrant/ubuntu-download.yaml rename to stack/vagrant/ubuntu-download.yaml diff --git a/vagrant/workflow.yaml b/stack/vagrant/workflow.yaml similarity index 100% rename from vagrant/workflow.yaml rename to stack/vagrant/workflow.yaml