Add a command for wiping a cluster #131
I'd suggest that this be implemented in two separate PRs... First to clean the cluster without the
Hi, I was doing some tests and reading up on how I can work on this. Here is my approach.

In order to cover the items above, the following actions will be taken in the code:
```go
// List all CephClusters in the namespace, then delete each one.
cephClusters, err := clientsets.Rook.CephV1().CephClusters(cephClusterNamespace).List(ctx, v1.ListOptions{})
if err != nil {
	return err
}
for _, cc := range cephClusters.Items {
	if err := clientsets.Rook.CephV1().CephClusters(cephClusterNamespace).Delete(ctx, cc.Name, v1.DeleteOptions{}); err != nil {
		return err
	}
	fmt.Printf("resource clusterName:%s, kind:%s, finalizers:%+v has been deleted\n", cc.Name, cc.Kind, cc.Finalizers)
	// Check whether any resource is still alive after deletion; if so,
	// remove its finalizers (see the sketch below).
}
```
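For the finalizer-removal step flagged in the comment above, a minimal sketch could look like the following. It assumes the generated Rook clientset (the same `clientsets.Rook` used in the snippet) and uses a JSON merge patch that sets `metadata.finalizers` to null, which clears all finalizers at once; the helper name `removeFinalizers` is hypothetical, not an existing plugin function.

```go
import (
	"context"

	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"

	rookclient "github.com/rook/rook/pkg/client/clientset/versioned"
)

// removeFinalizers (hypothetical helper) clears the finalizers on a
// CephCluster that is stuck terminating so its deletion can complete.
func removeFinalizers(ctx context.Context, rook rookclient.Interface, namespace, name string) error {
	// A JSON merge patch with "finalizers": null removes all finalizers.
	patch := []byte(`{"metadata":{"finalizers":null}}`)
	_, err := rook.CephV1().CephClusters(namespace).Patch(ctx, name, types.MergePatchType, patch, v1.PatchOptions{})
	return err
}
```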
Optional actions: we should also consider adding a verbose flag. Let me know what you think.
We need
The pattern we may need is:
The plugin has helpers for
The admin may be finished with a Rook cluster and want to wipe everything in the cluster. There is no concern for data loss; the admin just wants to destroy the cluster. Consider:
Potential command:

kubectl rook-ceph destroy-cluster --yes-really-destroy-cluster --wipe-host-path --sanitize-disks

--wipe-host-path: This option will enable the CephCluster cleanupPolicy that will clean the dataDirHostPath
--sanitize-disks: This option will enable the CephCluster cleanupPolicy with the default settings for wiping disks (quick with iterations: 1)
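As a rough sketch of how these flags could map onto the CephCluster spec, the command might merge-patch the documented cleanupPolicy fields before deleting the cluster. The field names below follow Rook's cleanupPolicy documentation (note the documented key is `iteration`); `clusterName` and the surrounding plumbing are assumptions, not the plugin's actual implementation.

```go
// Sketch: enable the cleanup policy on a CephCluster via a JSON merge patch,
// reusing the clientset pattern from the snippet earlier in this thread.
patch := []byte(`{"spec":{"cleanupPolicy":{
  "confirmation": "yes-really-destroy-data",
  "sanitizeDisks": {"method": "quick", "dataSource": "zero", "iteration": 1}
}}}`)
_, err := clientsets.Rook.CephV1().CephClusters(cephClusterNamespace).
	Patch(ctx, clusterName, types.MergePatchType, patch, v1.PatchOptions{})
if err != nil {
	return err
}
// With the cleanup policy set, deleting the CephCluster triggers the
// operator's cleanup jobs (dataDirHostPath wipe and disk sanitization).
```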
Actions:
Documentation will likely be sufficient for these cleanup items: