uninstall on missing install directory #746
If you clone and build hiveutil here: https://github.com/openshift/hive (see the bottom) you can then scrub the AWS resources by tags, which is the same code the installer uses if you still have your metadata.json. It would be nice for openshift-install to expose this functionality in the event you've lost your metadata, though.
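As a minimal sketch of that tag-based scrub (assuming the kubernetes.io/cluster/<name>=owned tag convention quoted later in this thread, with the example cluster name and region taken from the comments below), the resources it would touch can be listed with the AWS CLI before anything is deleted:

  aws resourcegroupstaggingapi get-resources \
    --region us-east-1 \
    --tag-filters Key=kubernetes.io/cluster/wking,Values=owned \
    --query 'ResourceTagMappingList[].ResourceARN'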
Exactly. Cleaning up should be as user-friendly as the installation, IMO.
Pushing the metadata into the cluster would address the "I've lost my metadata.json" case.
That would just leave the "my cluster is also broken" use case, where it would be awesome to have something like: openshift-install destroy cluster --platform=aws --uuid=clusteruuid
If we're addressing that, why bother with pushing the metadata into the cluster?
You also need to know the region (although we could assume the user has that configured in their environment). Something like:

  openshift-install destroy cluster --metadata='{"platform": "libvirt", "clusterName": "wking", "uri": "qemu+tcp://192.168.122.1/system"}'

and:

  openshift-install destroy cluster --metadata='{"platform": "aws", "region": "us-east-1", "clusterID": "fb038bc9-b005-4fc8-996e-0d4968595937"}'

You can already get pretty close to that with:

  echo '{"clusterName": "wking", "aws": {"region": "us-east-1", "identifier": [{"tectonicClusterID": "fb038bc9-b005-4fc8-996e-0d4968595937"}, {"kubernetes.io/cluster/wking": "owned"}]}}' >metadata.json

followed by openshift-install destroy cluster; supporting --metadata would just mean adding the option and simplifying the metadata loading.
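To make that workaround concrete (a sketch assuming the installer's --dir flag for pointing at an asset directory), the recovered metadata.json can be placed in an otherwise empty directory and destroy run against it:

  mkdir scratch
  mv metadata.json scratch/    # the file produced by the echo above
  openshift-install destroy cluster --dir scratch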
Indeed, that is close; it's just not great UX. I'm sure we can live with it internally, but it's not the best foot forward to show a customer when they inevitably look to do this.
I'm open to UX improvements, but aside from the platform string, the remaining information needed is fundamentally different for each platform. Did you want per-platform subcommands with positional arguments?
Something like an AWS subcommand (pulling the region from the usual places if unspecified) and a libvirt subcommand (pulling the URI from the usual places if unspecified)?
One more question: if the user has also lost the CLUSTER_ID, how do they proceed with the destroy? Search for it in the AWS instance tags? Is there any other way to get the cluster ID, such as an oc command? Today I hit another such UX issue.
It seems like I could run "oc get machineset -n openshift-cluster-api -o yaml" to get the cluster ID.
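As a follow-up sketch (assuming the machine sets' provider specs embed the cluster's AWS tags, including the tectonicClusterID tag from the metadata example above; the exact field layout may differ), the ID could be filtered out of that output:

  oc get machineset -n openshift-cluster-api -o yaml | grep -i -A 1 tectonicClusterID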
@jianlinliu, the correct way to get the cluster ID is from the metadata.json the installer writes at create time.
And for multiple clusters in one account with the same name, that's an open issue as well: #762.
Reading this and related issues, I'm thinking that the best approach would be for the installer to have a discover mode, where it detects signs of clusters in a particular account/region based on the tags it sets during provisioning. That way, when we end up with a cloud account full of old cluster pieces, the installer can be used to discover those and clean them up properly.
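As a minimal sketch of what such discovery could mean on AWS (an assumption layered on this comment, not an existing installer feature), the per-cluster tag keys can already be enumerated directly, with each kubernetes.io/cluster/<name> key naming one possibly stale cluster:

  aws resourcegroupstaggingapi get-tag-keys \
    --region us-east-1 \
    --query "TagKeys[?starts_with(@, 'kubernetes.io/cluster/')]"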
Documented: https://access.redhat.com/solutions/3826921
The above article is the recommended procedure for recovering the cluster metadata. /close
@crawford: Closing this issue.
/reopen Manually messing with AWS resources is not a solution, just a workaround. I didn't have to mess with AWS resources to create a cluster, and I shouldn't have to in order to destroy one. The installer asks for ~4 inputs when creating the cluster; asking for those again, or listing existing clusters and offering a choice of which to delete, would be the appropriate counterpart.
@tnozicka: Reopened this issue.
Closing this issue. I have updated the kbase article with another way to get the clusterID without going to AWS, provided the cluster is still running. If the cluster is no longer running, getting the clusterID from AWS is the only option, since the UUID is generated at install time.
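For a cluster that is still running, one such lookup (an assumption about OpenShift 4.x clusters, not a quote from the kbase article) is the clusterID recorded on the ClusterVersion object:

  oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'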
Available cluster names and cluster IDs can be discovered by the installer. There is no reason to ask the user to find them manually. Ideally there should be a mode where any cluster's resources are removed by name (without the cluster ID). For test clusters this is the most useful approach, since it avoids stale resources that can break a new install.
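Building on the metadata.json shape quoted earlier (a sketch assuming each entry in the identifier list acts as an independent tag filter), name-only removal already looks expressible by omitting the UUID entry:

  echo '{"clusterName": "wking", "aws": {"region": "us-east-1", "identifier": [{"kubernetes.io/cluster/wking": "owned"}]}}' >metadata.json
  openshift-install destroy cluster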
@akostadinov how does the installer do this?
It does not, presently. I was trying to say that instead of asking the user to discover cluster names and IDs, it would be more user-friendly to make the installer able to discover those.
When a user installs a cluster but then deletes the directory created by the installer, there is no easy way to remove the cluster.
I think all the necessary metadata should already exist inside the cluster, so it should be possible for the user to uninstall a cluster just by pointing the installer at the target cluster.
Version
7e7c26f