Remove public IPs from masters #1045
Conversation
Don't we still need these until we get a better fix than openshift/origin#21700 for CI?
I'm hacking on a simple tool for dev and CI (but not for customers) to get an ssh pod running which can act as an easy-to-deploy bastion. Maybe I'll be able to show that PoC tomorrow. I think that should be enough to merge this...
Linking the bastion config support so I can find it later: openshift/release#2469 ;).
/cc @blrm
(branch force-pushed from f107702 to c00463a)
(branch force-pushed from c00463a to 8052f24)
@wking on a slightly different topic, shouldn't the installer deploy a bastion too so operators are able to ssh into masters/workers? While it's all nice that the work has already been done for CI, for prod deployments aren't we leaving the operator in the dark? ... my $0.02 though
I'm in favor of pushing people away from jumping to SSH as their go-to. Providing a bastion by default goes against that goal. Creating a bastion host should be trivial for administrators, and if it isn't, there are plenty of strategies that leverage SSH pods (like @eparis mentioned).
(branch force-pushed from 8052f24 to 45d32fe)
https://github.com/eparis/ssh-bastion provides an easy to deploy bastion pod.
now to watch what is still using ssh in CI. hopefully most things have moved to
Holding while I wait for a little more buy-in. |
I'm sold :). /hold cancel
/retest
/hold @wking not from this team specifically ;) I'll remove the hold once I'm satisfied with the responses I get (offline). |
Does this exist yet? Being able to have an
FWIW I sometimes
I understand why this change is a good idea. I think some help in the description breaking down the debugging scenarios would help everyone.
If we got far enough for bootstrap teardown, we were at least beyond this point at some point. So use kubelet pods on a master or, lacking that, fall back to the bootstrap node.
Now that openshift/origin#21997 has merged, I think cases 1, 2, and 4 are reasonably covered. /hold cancel |
/retest |
/retest |
e2e-aws had a number of flakes and then:
/retest |
/retest |
In 6c10827 (Removing unused/deprecated security groups and ports, 2019-02-23, openshift#1306), we restricted master SSH access to the cluster, catching up with 6add0ab (data/aws: move the masters to the private subnets, 2019-01-10, openshift#1045). But the bootstrap node is a useful SSH bastion for debugging hung installs (until we get far enough along to tear down the bootstrap resources). This commit restores global SSH access to the bootstrap node, now that it is no longer provided by the master security group.
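For illustration only, here is a minimal Terraform sketch of the kind of rule that restores global SSH access to the bootstrap node. The resource name and the `aws_security_group.bootstrap` reference are assumptions for this sketch, not the installer's actual identifiers:

```hcl
# Hypothetical sketch: open SSH (port 22) to the bootstrap node from anywhere,
# so it can serve as a debugging bastion until bootstrap teardown.
# "aws_security_group.bootstrap" is an assumed name, not the installer's real one.
resource "aws_security_group_rule" "bootstrap_ssh" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 22
  to_port           = 22
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.bootstrap.id
}
```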
These are from b7cc916 (Set KUBE_SSH_USER for new installer for AWS tests, openshift#2274) and 43dde9e (Set KUBE_SSH_BASTION and KUBE_SSH_KEY_PATH in installer tests, 2018-12-23, openshift#2469). But moving forward, reliable SSH access direct to nodes will be hard, with things like openshift/installer@6add0ab447 (Remove public IPs from masters, 2019-01-10, openshift/installer#1045) making an SSH bastion a requirement for that sort of thing (at least on AWS). Going forward, ideally e2e tests can be ported to use privileged pods within the cluster to check what they need to check. But however that works out, stop carrying local dead code that is not affecting test results. We can always drag it back out of version control later if it turns out we actually want to go down the KUBE_SSH_* route.
This renames our subnets from master/worker to public/private. It then moves the masters into the 'private' subnet and removes the public IPs from all of the masters.
It leaves the bootstrap node in the public subnet and keeps its public IP.
This better follows AWS security best practices by not exposing our machines to the open internet. It does mean that customers will need to provide their own bastion server for SSH access.
Fixes: #747
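As a rough sketch of the end state this PR describes, the Terraform below shows masters launching in a private subnet with no public IP. The resource names, CIDR, AMI variable, and instance type are all hypothetical placeholders, not the installer's actual modules or values:

```hcl
# Hypothetical sketch: a private subnet whose instances get no public IPs,
# and master instances launched into it without public addresses.
resource "aws_subnet" "private" {
  vpc_id                  = aws_vpc.cluster.id # assumed VPC resource name
  cidr_block              = "10.0.1.0/24"      # placeholder CIDR
  map_public_ip_on_launch = false
}

resource "aws_instance" "master" {
  count                       = 3
  ami                         = var.rhcos_ami # placeholder AMI variable
  instance_type               = "m4.xlarge"   # placeholder instance type
  subnet_id                   = aws_subnet.private.id
  associate_public_ip_address = false
}
```

With the masters reachable only from inside the VPC, SSH access requires a bastion (or an in-cluster SSH pod), which is exactly the trade-off discussed in the conversation above.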