Lost metadata.json and kubeconfig #1583
Comments
Nevermind, I managed to generate a metadata.json file by running ./openshift-install create install-config and then ./openshift-install create ignition-configs. From there I modified the file with the cluster ID and infra ID from the console, and it worked for me.
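The fix described above amounts to regenerating a fresh metadata.json and then overwriting its IDs with the ones from the live cluster. A minimal sketch of that patching step, assuming the clusterID and infraID field names that openshift-install writes (the file path and ID values below are placeholders, not values from this issue):

```python
import json

def patch_metadata(path, cluster_id, infra_id):
    """Overwrite the cluster/infra IDs in a regenerated metadata.json
    so that `openshift-install destroy cluster` targets the live cluster."""
    with open(path) as f:
        meta = json.load(f)
    # These keys are what openshift-install's destroyer reads;
    # the values must come from the real cluster (e.g. the web console).
    meta["clusterID"] = cluster_id
    meta["infraID"] = infra_id
    with open(path, "w") as f:
        json.dump(meta, f, indent=2)
    return meta

# Hypothetical usage -- substitute your own IDs:
# patch_metadata("metadata.json",
#                "11111111-2222-3333-4444-555555555555",
#                "mycluster-abc12")
```

After patching, ./openshift-install destroy cluster run in the same directory should pick up the corrected file.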
Closing as a dup of #746. But yeah, "figure out the missing IDs somehow and plug them in to a …"

/close
@wking: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I have been testing out OpenShift 4 via the preview and have a cluster installed in AWS. I installed it from an AWS instance and was using that instance to manage it. I have the kubeadmin login and can log in to the console.
My issue arises because the AWS instance I used to install and manage the cluster was terminated. As a result, I've lost the kubeconfig and the metadata details needed to destroy the cluster.
There seem to be some ways of working around the missing metadata to delete the cluster, but the details require a subscription: https://access.redhat.com/solutions/3826921
Is anyone able to give me some advice? Particularly on re-generating the kubeconfig somehow.