Implement "destroy cluster" support #74
Having the baremetal-operator drive Ironic to destroy itself is not ideal, as we can't ensure that the cluster is actually fully destroyed. In particular, we can't drive all of the nodes through cleaning. One way to do this would be to reverse the way Ironic moves during cluster deployment: copy all of the host information out of the cluster, shut down the baremetal-operator, and then re-launch Ironic on the provisioning host. The installer could then drive the local Ironic to ensure all hosts are deprovisioned.
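For illustration, here's a minimal sketch of what that installer-driven deprovisioning loop might look like, assuming the hosts have already been re-registered with a standalone Ironic on the provisioning host. The gophercloud baremetal client is used as one plausible choice; the endpoint, microversion, and noauth setup are assumptions, not anything the installer currently does:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/baremetal/noauth"
	"github.com/gophercloud/gophercloud/openstack/baremetal/v1/nodes"
)

// deprovisionAll walks every node registered with the local Ironic and
// requests deletion (deprovisioning), which triggers cleaning when
// automated cleaning is enabled in Ironic's configuration.
func deprovisionAll(client *gophercloud.ServiceClient) error {
	pages, err := nodes.List(client, nodes.ListOpts{}).AllPages()
	if err != nil {
		return err
	}
	all, err := nodes.ExtractNodes(pages)
	if err != nil {
		return err
	}
	for _, n := range all {
		// Move the node toward "available"; Ironic runs cleaning
		// (e.g. disk erase) on the way if configured to do so.
		err := nodes.ChangeProvisionState(client, n.UUID, nodes.ProvisionStateOpts{
			Target: nodes.TargetDeleted,
		}).ExtractErr()
		if err != nil {
			return fmt.Errorf("deprovision %s: %v", n.Name, err)
		}
	}
	// A real implementation would poll each node until it reaches a
	// terminal state before declaring the destroy complete.
	return nil
}

func main() {
	// Hypothetical standalone Ironic endpoint on the provisioning host.
	client, err := noauth.NewBareMetalNoAuth(noauth.EndpointOpts{
		IronicEndpoint: "http://172.22.0.1:6385/v1",
	})
	if err != nil {
		panic(err)
	}
	client.Microversion = "1.50"
	if err := deprovisionAll(client); err != nil {
		panic(err)
	}
}
```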
Running Ironic locally like that is an interesting one. I'd assumed we'd run Ironic on the bootstrap VM in the deploy case (where there's no external Ironic, e.g. on the provisioning host), but since there's no bootstrap VM on destroy, that approach won't work. So I wonder if we should just run the Ironic pod on the host via kni-installer in both cases?
This is actually quite tricky to implement in the same way as other platforms, because they all rely on tagging resources, then discovering and deleting everything that carries those tags. That won't work here unless we have a single long-lived Ironic to maintain the state/tags. I think we'll have to either scale down the worker machineset, kill the BMO (and hosted Ironic), then spin up another Ironic to delete the masters (using details gathered from the externally provisioned BareMetalHost objects), or just grab all of the BareMetalHost details, kill the BMO/Ironic, then use another/local Ironic to tear them all down.
I agree with this. |
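To make the "grab all the BareMetalHost details" step concrete, here's a rough sketch using the controller-runtime client. The metal3 v1alpha1 import path, the `openshift-machine-api` namespace, and the List call shape are assumptions drawn from the surrounding projects, not settled code:

```go
package main

import (
	"context"
	"fmt"

	metal3v1alpha1 "github.com/metal3-io/baremetal-operator/pkg/apis/metal3/v1alpha1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

// gatherHosts snapshots every BareMetalHost in the cluster so the BMC
// addresses and credential references survive teardown of the cluster
// (and of the hosted Ironic) itself.
func gatherHosts(ctx context.Context) ([]metal3v1alpha1.BareMetalHost, error) {
	cfg, err := config.GetConfig()
	if err != nil {
		return nil, err
	}
	scheme, err := metal3v1alpha1.SchemeBuilder.Build()
	if err != nil {
		return nil, err
	}
	c, err := client.New(cfg, client.Options{Scheme: scheme})
	if err != nil {
		return nil, err
	}
	hosts := &metal3v1alpha1.BareMetalHostList{}
	if err := c.List(ctx, hosts, client.InNamespace("openshift-machine-api")); err != nil {
		return nil, err
	}
	for _, h := range hosts.Items {
		// These details would be fed to the standalone Ironic on destroy.
		fmt.Printf("host %s: BMC %s\n", h.Name, h.Spec.BMC.Address)
	}
	return hosts.Items, nil
}
```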
Ok so I think we should solve this by first fixing #68 so we can optionally launch Ironic on the bootstrap VM via an injected manifest provided by ignition, then on destroy launch a similar VM with the same configuration (but without the bootstrap configuration). This should mean some reuse, since we'll use the exact same pattern/config to deploy the masters and do deprovisioning on destroy, but also avoids potential complexity of running the Ironic container on the host directly (where we may want to support multiple OS options, and may not want to require host access e.g to modify firewall rules etc). If that sounds reasonable, I'll take a look at enabling the bootstrap VM to run ironic, ideally using same/similar configuration that we enable for worker deployment in metal3-io/baremetal-operator#72 |
How much cleaning is really involved? Could we just launch a DaemonSet to trigger wiping the partition table and then reboot the host? |
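For reference, a minimal sketch of what such a wipe DaemonSet might look like if created from Go with client-go. The image, the target device, and the sysrq reboot trick are all illustrative placeholders (and sysrq must be enabled on the host), not a tested recipe:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	privileged := true
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "destroy-wipe", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "destroy-wipe"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "destroy-wipe"}},
				Spec: corev1.PodSpec{
					HostPID: true,
					// Tolerate everything so the pod also lands on masters.
					Tolerations: []corev1.Toleration{{Operator: corev1.TolerationOpExists}},
					Containers: []corev1.Container{{
						Name:  "wipe",
						Image: "registry.example.com/toolbox:latest", // hypothetical image with sgdisk
						// Zap the partition table, then force a reboot via sysrq.
						// /dev/sda is illustrative; a real job would discover devices.
						Command: []string{"/bin/sh", "-c",
							"sgdisk --zap-all /dev/sda && echo b > /proc/sysrq-trigger"},
						SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```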
A DaemonSet wipe would be simpler for sure, but the downside is the lack of any out-of-band component to verify that the cluster really has been destroyed and the process is complete.
This may be something to discuss with product management downstream, I guess, but FWIW we've already seen issues redeploying Ceph on boxes where the disks weren't cleaned of metadata from previous deployments, and I assume there would be security/compliance reasons to prefer cleaning all of the cluster data from the disks. I also assumed we'd want all the nodes (including the masters) powered down after the destroy operation, which is probably most easily achieved using Ironic, at which point turning on cleaning during deprovision becomes straightforward.
replaced by openshift/installer#2005 |
kni-install does not yet support "destroy cluster" for baremetal clusters.
See pkg/destroy/baremetal/baremetal.go for the stub, and the other implementations under pkg/destroy/ for examples from other platforms.
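For anyone picking this up, here's a hedged sketch of the shape that stub might take, assuming the baremetal destroyer follows the same Run-method pattern as the other platforms (the struct fields and interface shape here are guesses, not the actual installer API):

```go
// Package baremetal sketches what pkg/destroy/baremetal/baremetal.go
// might grow into; the Destroyer shape mirrors other platforms but
// is assumed rather than taken from the installer.
package baremetal

import (
	"github.com/sirupsen/logrus"
)

// ClusterUninstaller holds what we'd need to reach the hosts once the
// cluster (and its hosted Ironic) is gone: BMC details captured from
// the BareMetalHost objects plus the endpoint of a standalone Ironic.
type ClusterUninstaller struct {
	IronicEndpoint string
	Logger         logrus.FieldLogger
}

// Run tears the cluster down: re-register every host with the local
// Ironic, deprovision (and clean) each one, then power everything off.
func (u *ClusterUninstaller) Run() error {
	u.Logger.Info("re-registering hosts with standalone Ironic")
	// 1. Register nodes using the saved BMC credentials.
	// 2. Drive each node through deprovisioning/cleaning
	//    (see the deprovisionAll sketch earlier in this thread).
	// 3. Power the nodes off and exit.
	return nil
}
```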