Remove disk constraints in Tinkerbell provider #3234
Labels: area/providers/tinkerbell, kind/enhancement, team/providers
#3233 is a prereq to this issue.
The Tinkerbell provider imposes a constraint that requires customers to use machines with the same disk type within a node group (e.g. the control plane or worker node group 1). This was because Tinkerbell templates didn't have access to the hardware associated with a workflow at render time, so disk paths had to be pre-populated by the EKS-A CLI before hardware was selected.
The latest changes to Tinkerbell feed hardware data (currently disks only) to templates, rooted at `.Hardware`. A function for rendering full disk paths with a partition, called `formatPartition`, was added; it supports block (`/dev/sd`) and NVMe (`/dev/nvme`) devices.

Example usage of the disk partitioning function: `index .Hardware.Disks 0` retrieves the first disk in the disks slice taken from the `Hardware` Kubernetes object associated with the workflow, and `formatPartition <disk> 1` formats that disk path with partition 1.
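To make the behaviour concrete, here is a minimal, self-contained Go sketch. The `hardware` struct, the `formatPartition` implementation, and the template line are assumptions based on the description above (block vs. NVMe partition naming), not the provider's actual code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"text/template"
)

// hardware mirrors the shape described above: disk paths exposed to templates
// under .Hardware. (Illustrative only; field names are assumed.)
type hardware struct {
	Disks []string
}

// formatPartition joins a disk path and a partition number, using the NVMe
// convention (/dev/nvme0n1p1) or the plain block-device one (/dev/sda1).
// This is a sketch of the behaviour, not the provider's implementation.
func formatPartition(disk string, partition int) string {
	if strings.HasPrefix(disk, "/dev/nvme") {
		return fmt.Sprintf("%sp%d", disk, partition)
	}
	return fmt.Sprintf("%s%d", disk, partition)
}

func main() {
	// The template line mirrors the usage described in the issue: take the
	// first disk from .Hardware.Disks and render it with partition 1.
	const tpl = `DEST_DISK: {{ formatPartition (index .Hardware.Disks 0) 1 }}`

	t := template.Must(template.New("action").
		Funcs(template.FuncMap{"formatPartition": formatPartition}).
		Parse(tpl))

	data := map[string]any{"Hardware": hardware{Disks: []string{"/dev/nvme0n1"}}}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```

Running this prints `DEST_DISK: /dev/nvme0n1p1`, i.e. the first disk from `.Hardware.Disks` with partition 1 appended using the NVMe `p` separator; a `/dev/sda` disk would render as `/dev/sda1`.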