This repository has been archived by the owner on Jul 23, 2019. It is now read-only.

Implement "destroy cluster" support #74

Closed
russellb opened this issue May 7, 2019 · 9 comments

Comments


russellb commented May 7, 2019

kni-install does not yet support "destroy cluster" for baremetal clusters.

See pkg/destroy/baremetal/baremetal.go for the stub, and the other implementations under pkg/destroy/ for examples from other platforms.
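
For anyone picking this up, the destroyer entry point on the other platforms is essentially a type with a Run method. A minimal sketch of the shape the baremetal implementation could take; the type and field names below are illustrative assumptions, not the actual contents of the stub:

```go
package baremetal

import (
	"github.com/sirupsen/logrus"
)

// ClusterUninstaller would carry whatever we need to tear the cluster down,
// e.g. how to reach Ironic on the provisioning host (hypothetical field).
type ClusterUninstaller struct {
	Logger logrus.FieldLogger
}

// Run is the entry point the installer invokes for "destroy cluster".
func (u *ClusterUninstaller) Run() error {
	u.Logger.Warn("destroy cluster is not yet implemented for the baremetal platform")
	return nil
}
```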


russellb commented May 7, 2019

Having the baremetal-operator drive Ironic to destroy itself is not ideal, as we can't ensure that the cluster is actually fully destroyed. In particular, we can't drive all of the nodes through cleaning.

One way to do this would be to reverse how Ironic moves during the cluster deployment process: copy all of the host information out of the cluster, shut down the baremetal-operator, and then re-launch Ironic on the provisioning host. The installer could then drive the local Ironic to ensure all hosts are deprovisioned.
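
As a rough sketch of that last step, assuming a standalone (noauth) Ironic on the provisioning host and gophercloud's baremetal client; the endpoint URL and microversion here are assumptions:

```go
package main

import (
	"log"

	"github.com/gophercloud/gophercloud/openstack/baremetal/noauth"
	"github.com/gophercloud/gophercloud/openstack/baremetal/v1/nodes"
	"github.com/gophercloud/gophercloud/pagination"
)

func main() {
	// Hypothetical local Ironic endpoint on the provisioning host.
	client, err := noauth.NewBareMetalNoAuth(noauth.EndpointOpts{
		IronicEndpoint: "http://172.22.0.1:6385/v1",
	})
	if err != nil {
		log.Fatal(err)
	}
	client.Microversion = "1.50" // provision-state API needs a recent microversion

	// Walk every registered node and request deprovisioning; the "deleted"
	// target tears the node down and runs automated cleaning if enabled.
	err = nodes.List(client, nodes.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
		all, err := nodes.ExtractNodes(page)
		if err != nil {
			return false, err
		}
		for _, n := range all {
			res := nodes.ChangeProvisionState(client, n.UUID, nodes.ProvisionStateOpts{
				Target: nodes.TargetDeleted,
			})
			if res.Err != nil {
				return false, res.Err
			}
			log.Printf("deprovision requested for node %s", n.UUID)
		}
		return true, nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```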


hardys commented May 9, 2019

This is an interesting one. I'd assumed we'd run Ironic on the bootstrap VM in the deploy case (where there's no external Ironic, e.g. on the provisioning host), but since there's no bootstrap VM on destroy, that approach won't work. I wonder if we should just run the Ironic pod on the host via kni-installer in both cases?


hardys commented May 10, 2019

This is actually quite tricky to implement in the same way as the other platforms, because they all rely on tagging resources, then discovering all the tagged resources and deleting them. That won't work here unless we have a single, long-lived Ironic to maintain the state/tags.

I think we'll have to either scale down the worker machineset, kill the BMO (and hosted Ironic), then spin up another Ironic to delete the masters (using details gathered from the externally provisioned BareMetalHost objects), or just grab all the BareMetalHost details, kill the BMO/Ironic, then use another/local Ironic to tear them all down.
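
For the "grab all the BareMetalHost details" step, a sketch using the dynamic client so the installer doesn't need the operator's Go types; the metal3.io/v1alpha1 group/version and the openshift-machine-api namespace are assumptions:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// gatherHosts snapshots BareMetalHost details before we kill the BMO/Ironic,
// so a replacement Ironic can re-register the nodes for teardown.
func gatherHosts(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	gvr := schema.GroupVersionResource{
		Group:    "metal3.io",
		Version:  "v1alpha1",
		Resource: "baremetalhosts",
	}
	list, err := client.Resource(gvr).Namespace("openshift-machine-api").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, host := range list.Items {
		// The BMC address (plus credentials) is what the local Ironic needs.
		addr, _, _ := unstructured.NestedString(host.Object, "spec", "bmc", "address")
		fmt.Printf("%s: %s\n", host.GetName(), addr)
	}
	return nil
}
```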

@russellb

> I think we'll have to either scale down the worker machineset, kill the BMO (and hosted Ironic), then spin up another Ironic to delete the masters (using details gathered from the externally provisioned BareMetalHost objects), or just grab all the BareMetalHost details, kill the BMO/Ironic, then use another/local Ironic to tear them all down.

I agree with this.


hardys commented May 10, 2019

OK, so I think we should solve this by first fixing #68 so we can optionally launch Ironic on the bootstrap VM via an injected manifest provided by Ignition. Then, on destroy, we'd launch a similar VM with the same configuration (but without the bootstrap configuration).

This should mean some reuse, since we'd use the exact same pattern/config to deploy the masters and to deprovision on destroy. It also avoids the potential complexity of running the Ironic container on the host directly (where we may want to support multiple OS options, and may not want to require host access, e.g. to modify firewall rules).

If that sounds reasonable, I'll take a look at enabling the bootstrap VM to run Ironic, ideally using the same (or similar) configuration that we enable for worker deployment in metal3-io/baremetal-operator#72.
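
For example, the destroy path could append a systemd unit to the Ignition config the installer already generates for the VM. Very much a sketch; the unit contents and image are assumptions:

```go
package main

import (
	ignition "github.com/coreos/ignition/config/v2_2/types"
)

func boolPtr(b bool) *bool { return &b }

// ironicUnit is a hypothetical systemd unit that runs Ironic in a container
// on the bootstrap VM (or on a similar throwaway VM during destroy).
func ironicUnit() ignition.Unit {
	return ignition.Unit{
		Name:    "ironic.service",
		Enabled: boolPtr(true),
		Contents: `[Unit]
Description=Ironic provisioning services
After=network-online.target

[Service]
ExecStart=/usr/bin/podman run --rm --net host --privileged --name ironic quay.io/metal3-io/ironic
Restart=on-failure

[Install]
WantedBy=multi-user.target
`,
	}
}

// addIronicToConfig injects the unit into an existing Ignition config.
func addIronicToConfig(cfg *ignition.Config) {
	cfg.Systemd.Units = append(cfg.Systemd.Units, ironicUnit())
}
```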

@dhellmann

How much cleaning is really involved? Could we just launch a DaemonSet to trigger wiping the partition table and then reboot the host?
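
Something like this, roughly; the image, disk device, and namespace are placeholders:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// wipeDaemonSet builds a privileged DaemonSet that zaps each node's
// partition table and then forces an immediate reboot via sysrq.
func wipeDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"app": "wipe-disks"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "wipe-disks", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Tolerate everything so the pod also lands on masters.
					Tolerations: []corev1.Toleration{{Operator: corev1.TolerationOpExists}},
					Containers: []corev1.Container{{
						Name:            "wipe",
						Image:           "quay.io/example/wipe-tools", // hypothetical image with sgdisk
						SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
						Command: []string{"/bin/sh", "-c",
							// Assumes /dev/sda; real code would enumerate disks.
							"sgdisk --zap-all /dev/sda && echo b > /proc/sysrq-trigger"},
					}},
				},
			},
		},
	}
}
```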

@russellb

> How much cleaning is really involved? Could we just launch a DaemonSet to trigger wiping the partition table and then reboot the host?

That would be simpler for sure, but the downside is the lack of any out-of-band components to verify that the cluster really has been destroyed and the process is complete.


hardys commented May 13, 2019

> How much cleaning is really involved? Could we just launch a DaemonSet to trigger wiping the partition table and then reboot the host?

This may be something to discuss with product management downstream, I guess. FWIW, we've already seen issues redeploying Ceph on boxes where the disks weren't cleaned of metadata from previous deployments, and I was assuming there would be security/compliance reasons to prefer cleaning all of the cluster data from the disks.

I also assumed we'd want all the nodes (including the masters) powered down after the destroy operation, which is probably most easily achieved using Ironic, at which point enabling cleaning on deprovision becomes easier as well.
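
Powering everything off is a cheap final step once we have an Ironic client in hand. A sketch, reusing the assumed gophercloud client from the earlier deprovisioning example:

```go
package main

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/baremetal/v1/nodes"
	"github.com/gophercloud/gophercloud/pagination"
)

// powerOffAll asks Ironic to power every registered node off, e.g. as the
// last step of destroy once deprovisioning/cleaning has finished.
func powerOffAll(client *gophercloud.ServiceClient) error {
	return nodes.List(client, nodes.ListOpts{}).EachPage(func(page pagination.Page) (bool, error) {
		all, err := nodes.ExtractNodes(page)
		if err != nil {
			return false, err
		}
		for _, n := range all {
			res := nodes.ChangePowerState(client, n.UUID, nodes.PowerStateOpts{
				Target: nodes.PowerOff,
			})
			if res.Err != nil {
				return false, res.Err
			}
		}
		return true, nil
	})
}
```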

@russellb

Replaced by openshift/installer#2005.
