Installer creates instances without SSH key-pair #862
Instance<->key associations are an AWS-ism, and the cluster lays down your configured key using Ignition (which is generic) instead. And soon the installed key will be configurable post-install via the machine-config operator, see openshift/machine-config-operator#115. Did you actually have trouble SSHing in with the key you configured at install-time, or were you just surprised by not seeing the implementation you were expecting?
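For context on the mechanism being described: Ignition delivers SSH keys through its `passwd` section rather than through a cloud provider key pair. A minimal sketch, assuming the Ignition 2.2.0 spec (the key value is an illustrative placeholder; the config the installer actually generates is more involved):

```json
{
  "ignition": { "version": "2.2.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAA...your-public-key..."
        ]
      }
    ]
  }
}
```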
I didn't face problems, but I was expecting a warning if no SSH key was found for the user. In my test, the installer deployed the nodes without any keys (and thus without admin access), which is immutable on AWS without the operator you mentioned.
No, you can SSH into bootstrap and masters as `core`.
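A typical login under that setup would be something like the following (a sketch; `core` is the default RHCOS user, and the host placeholder is illustrative):

```console
$ ssh core@<bootstrap-or-master-ip>
```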
What happens if there are no SSH keys for the user running the installer?
@davivcgarcia SSH access isn't required in OpenShift 4, so we don't ask for keys if we don't see any locally. If you want to add keys after the fact, this is possible with the Machine Config Operator, but we don't have docs yet for that (it is in progress though).
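As a sketch of what adding keys via the Machine Config Operator might look like (assuming a `MachineConfig` resource that embeds an Ignition `passwd` section; the name, role label, and key here are illustrative, and the exact schema was still in flight at the time of this thread):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    # the role label selects which machine pool receives the key
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-ssh
spec:
  config:
    ignition:
      version: 2.2.0
    passwd:
      users:
        - name: core
          sshAuthorizedKeys:
            - ssh-rsa AAAA...your-public-key...
```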
Until openshift/machine-config-operator#115 merges, I don't believe the installer should proceed without a key. It leaves us with un-debuggable clusters.
I'm also not sure how to do disaster recovery if the nodes have no SSH keys, even if I do have management of SSH keys via the MCO...
That is what the users have chosen by not supplying a key. And while SSH debugging is frequent at this stage of development, hopefully it becomes rare as things settle down over the next few months.
We currently set SSH keys in the pointer Ignition config. It looks like openshift/machine-config-operator#115 is about the machine-config daemon laying down keys (which is useful for updating keys without rebooting machines), but we could also teach the machine-config server to add the current key to Ignition configs. That would still get the installer out of key distribution (except for the bootstrap node), but machines would still get keys laid down by Ignition for debugging failures that happen before the machine is alive enough for a machine-config daemon.
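A sketch of the pointer-config shape being discussed, assuming Ignition 2.2.0 (the machine-config server URL and key are placeholders): the pointer appends the rendered config fetched from the machine-config server while carrying the SSH key directly, so the key is laid down even if that fetch fails.

```json
{
  "ignition": {
    "version": "2.2.0",
    "config": {
      "append": [
        { "source": "https://<machine-config-server>:22623/config/master" }
      ]
    }
  },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-rsa AAAA..."] }
    ]
  }
}
```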
I'm going to close this issue. I don't think SSH should be required and there is already work in-flight to allow the bootstrap logs to be streamed back without SSH. /close
@crawford: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
#986 also landed recently explicitly saying that we aren't using AWS key pairs, which should help mitigate that particular expectation.
Are there any use cases where a user will not want SSH access? There will always be situations where …
Version
Platform:
AWS
What happened?
If the installer doesn't find local SSH keys available, it does not warn the user and creates instances without any key.
What you expected to happen?
The installer should warn the user about that, and create a new key to be used on the instances.
How to reproduce it?
1. Make sure there are no SSH keys under `~/.ssh/`.
2. Run `openshift-installer create cluster`.
3. Check the created instances with `aws ec2 describe-instances | grep -i key`.
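To narrow that last check, one could query the key name directly (a sketch using the AWS CLI's JMESPath `--query` support; a `null` entry means no EC2 key pair is associated with that instance):

```console
$ aws ec2 describe-instances --query 'Reservations[].Instances[].KeyName'
```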