Add a command for wiping a cluster #131

Closed
travisn opened this issue Jun 23, 2023 · 3 comments · Fixed by #173
Labels: enhancement, priority

travisn (Member) commented Jun 23, 2023

The admin may be finished with a Rook cluster and want to wipe everything in it. There is no concern for data loss; the admin just wants to destroy the cluster. Consider:

Potential command:

  • kubectl rook-ceph destroy-cluster --yes-really-destroy-cluster --wipe-host-path --sanitize-disks
  • --wipe-host-path: This option will enable the CephCluster cleanupPolicy that will clean the dataDirHostPath
  • --sanitize-disks: This option will enable the CephCluster cleanupPolicy with the default settings for wiping disks (quick, with iteration: 1); a sketch of setting this policy follows the list
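
As referenced above, a minimal sketch of what enabling the cleanup policy could look like from the plugin, assuming the Rook clientset used elsewhere in this thread. The cleanupPolicy field names (confirmation: yes-really-destroy-data, sanitizeDisks) come from the CephCluster API; the function and argument names are placeholders:

package main

import (
	"context"
	"fmt"

	rookclient "github.com/rook/rook/pkg/client/clientset/versioned"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// enableCleanupPolicy patches the CephCluster so the operator cleans up
// dataDirHostPath and sanitizes disks when the cluster is deleted.
func enableCleanupPolicy(ctx context.Context, rook rookclient.Interface, namespace, clusterName string) error {
	// JSON merge patch mirroring the options above: confirm data destruction
	// and use the default quick sanitization with a single iteration.
	patch := []byte(`{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data",` +
		`"sanitizeDisks":{"method":"quick","dataSource":"zero","iteration":1}}}}`)

	_, err := rook.CephV1().CephClusters(namespace).Patch(ctx, clusterName, types.MergePatchType, patch, v1.PatchOptions{})
	if err != nil {
		return fmt.Errorf("failed to set cleanupPolicy on %s/%s: %w", namespace, clusterName, err)
	}
	return nil
}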

Actions:

  1. If either of those options is passed, set the cleanup policy and wait for the cleanup to complete before removing finalizers
  2. Delete all Rook CRs
  3. Remove all finalizers on Rook CRs
  4. Delete all Rook CRDs (a sketch of this step follows the list)
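
For step 4, a hedged sketch of deleting the Rook CRDs, assuming an apiextensions clientset is available alongside the Rook clientset; filtering on the ceph.rook.io group suffix is an assumption about how the Rook CRDs would be identified:

package main

import (
	"context"
	"fmt"
	"strings"

	apiext "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deleteRookCRDs removes every CRD in the ceph.rook.io API group.
func deleteRookCRDs(ctx context.Context, client apiext.Interface) error {
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(ctx, v1.ListOptions{})
	if err != nil {
		return fmt.Errorf("failed to list CRDs: %w", err)
	}
	for _, crd := range crds.Items {
		// Rook's CRDs (cephclusters, cephblockpools, ...) live in ceph.rook.io.
		if !strings.HasSuffix(crd.Name, ".ceph.rook.io") {
			continue
		}
		if err := client.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, v1.DeleteOptions{}); err != nil {
			return fmt.Errorf("failed to delete CRD %s: %w", crd.Name, err)
		}
		fmt.Printf("deleted CRD %s\n", crd.Name)
	}
	return nil
}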

Documentation will likely be sufficient for these cleanup items:

  • The admin needs to remove Helm charts
  • The admin needs to remove the RBAC (common.yaml) if Helm was not used
travisn (Member, Author) commented Jun 23, 2023

I'd suggest that this be implemented in two separate PRs... First to clean the cluster without the cleanupPolicy options, then later to also implement the cleanupPolicy.

Javlopez self-assigned this Aug 10, 2023
Javlopez (Contributor) commented

Hi, I was doing some tests and reading up on how I can work on this. Here is my approach:

  • Get the CRDs
  • Remove each CRD
  • Verify the deletion in case some resources are still stuck
    • update finalizers to delete everything
  • done

To cover the items above, these actions will be implemented in the code:

  • add a new command destroy-cluster (a cobra sketch follows this list)
    • with no arguments supported at this moment
  • iterate resource by resource to delete them
    • example:
// List every CephCluster in the namespace, then delete them one by one.
cephClusters, err := clientsets.Rook.CephV1().CephClusters(cephClusterNamespace).List(ctx, v1.ListOptions{})
if err != nil {
	return err
}
for _, cc := range cephClusters.Items {
	if err := clientsets.Rook.CephV1().CephClusters(cephClusterNamespace).Delete(ctx, cc.Name, v1.DeleteOptions{}); err != nil {
		return err
	}
	fmt.Printf("resource clusterName:%s, kind:%s, finalizers:%+v has been deleted\n", cc.Name, cc.Kind, cc.Finalizers)
	// check whether any resource is still alive; if yes, remove its finalizers
}
  • check whether any resources are still alive
  • if yes... Question: should I apply a patch to force-remove the finalizers?
  • finish the process
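
As referenced in the list above, a sketch of what registering the new subcommand could look like, assuming the plugin builds its command tree with spf13/cobra (an assumption); all names here are placeholders:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	destroyClusterCmd := &cobra.Command{
		Use:   "destroy-cluster",
		Short: "Delete all Rook custom resources and CRDs",
		Run: func(cmd *cobra.Command, args []string) {
			// Placeholder: iterate over the Rook CRs as in the snippet above,
			// remove any stuck finalizers, then delete the CRDs.
			fmt.Println("destroying cluster...")
		},
	}

	rootCmd := &cobra.Command{Use: "rook-ceph"}
	rootCmd.AddCommand(destroyClusterCmd)
	if err := rootCmd.Execute(); err != nil {
		fmt.Println(err)
	}
}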

Optional actions:

Should we consider adding a verbose flag like -v=true or -v=1,2,...,9 to show each step the process is performing?

Let me know what you think.

travisn (Member, Author) commented Aug 14, 2023

> Hi, I was doing some tests and reading up on how I can work on this. Here is my approach:
>
>   • Get the CRDs
>   • Remove each CRD
>   • Verify the deletion in case some resources are still stuck
>     • update finalizers to delete everything
>   • done
>
> To cover the items above, these actions will be implemented in the code:
>
>   • add a new command destroy-cluster
>     • with no arguments supported at this moment

We need --yes-really-destroy-cluster to prevent accidental deletion.
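
A hedged sketch of that confirmation guard, again assuming a cobra command as in the earlier sketch; only the flag name comes from this comment, everything else is a placeholder:

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var confirmed bool
	cmd := &cobra.Command{
		Use:   "destroy-cluster",
		Short: "Delete all Rook custom resources and CRDs",
		Run: func(cmd *cobra.Command, args []string) {
			// Refuse to do anything destructive unless explicitly confirmed.
			if !confirmed {
				fmt.Println("refusing to destroy the cluster; pass --yes-really-destroy-cluster to confirm")
				os.Exit(1)
			}
			fmt.Println("destroying cluster...")
		},
	}
	cmd.Flags().BoolVar(&confirmed, "yes-really-destroy-cluster", false, "confirm destruction of the cluster")
	if err := cmd.Execute(); err != nil {
		fmt.Println(err)
	}
}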

>   • iterate resource by resource to delete them
>     • example:
>
> // List every CephCluster in the namespace, then delete them one by one.
> cephClusters, err := clientsets.Rook.CephV1().CephClusters(cephClusterNamespace).List(ctx, v1.ListOptions{})
> if err != nil {
> 	return err
> }
> for _, cc := range cephClusters.Items {
> 	if err := clientsets.Rook.CephV1().CephClusters(cephClusterNamespace).Delete(ctx, cc.Name, v1.DeleteOptions{}); err != nil {
> 		return err
> 	}
> 	fmt.Printf("resource clusterName:%s, kind:%s, finalizers:%+v has been deleted\n", cc.Name, cc.Kind, cc.Finalizers)
> 	// check whether any resource is still alive; if yes, remove its finalizers
> }
>
>   • check whether any resources are still alive
>   • if yes... Question: should I apply a patch to force-remove the finalizers?

The pattern we may need is:

  1. Get the CR
  2. If it has a finalizer, patch it to remove the finalizer
  3. Delete the CR
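
A minimal self-contained sketch of this get/patch/delete pattern for CephClusters, assuming the Rook clientset from github.com/rook/rook; the merge patch that clears metadata.finalizers is one way to implement step 2, and the function name is a placeholder. Other Rook CR types would follow the same sequence:

package main

import (
	"context"
	"fmt"

	rookclient "github.com/rook/rook/pkg/client/clientset/versioned"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func deleteCephCluster(ctx context.Context, rook rookclient.Interface, namespace, name string) error {
	// 1. Get the CR.
	cluster, err := rook.CephV1().CephClusters(namespace).Get(ctx, name, v1.GetOptions{})
	if err != nil {
		return err
	}

	// 2. If it has a finalizer, patch it to remove the finalizer so the
	// deletion cannot hang waiting on the operator.
	if len(cluster.Finalizers) > 0 {
		patch := []byte(`{"metadata":{"finalizers":null}}`)
		if _, err := rook.CephV1().CephClusters(namespace).Patch(ctx, name, types.MergePatchType, patch, v1.PatchOptions{}); err != nil {
			return fmt.Errorf("failed to remove finalizers from %s: %w", name, err)
		}
	}

	// 3. Delete the CR.
	return rook.CephV1().CephClusters(namespace).Delete(ctx, name, v1.DeleteOptions{})
}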
>   • finish the process
>
> Optional actions:
>
> Should we consider adding a verbose flag like -v=true or -v=1,2,...,9 to show each step the process is performing?
>
> Let me know what you think.

The plugin has helpers for logger.Info(), Warning(), and Error() for different types of messages. We don't have verbosity levels because the tool should simply output what it is doing, and we haven't had a need for different levels of verbosity. It's fine for the plugin commands to output lots of log statements at Info level so the user knows what happened.
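
For illustration, a self-contained sketch of that logging convention; only the Info/Warning/Error helper names come from the comment above, so the logger is stubbed locally here rather than imported from the plugin:

package main

import "fmt"

type pluginLogger struct{}

func (pluginLogger) Info(format string, args ...interface{})    { fmt.Printf("Info: "+format+"\n", args...) }
func (pluginLogger) Warning(format string, args ...interface{}) { fmt.Printf("Warning: "+format+"\n", args...) }
func (pluginLogger) Error(format string, args ...interface{})   { fmt.Printf("Error: "+format+"\n", args...) }

var logger pluginLogger

func main() {
	// Each step of destroy-cluster logs what it is doing at Info level.
	logger.Info("deleting all CephClusters in namespace %q", "rook-ceph")
	logger.Warning("resource %q still has finalizers; removing them", "my-cluster")
	logger.Error("failed to delete CRD %q", "cephclusters.ceph.rook.io")
}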

obnoxxx added the enhancement label on Oct 2, 2023
Javlopez linked a pull request on Oct 10, 2023 that will close this issue