Add more details about host port ownership #618
Conversation
Can we really claim that masters are entirely off-limits? It's plausible that a customer would like to deploy a cluster-wide auditing service, say.
The PR description says "after 4.8, we will not claim any new ports (on workers) outside the 9000-9999 and 29000-29999." We've already claimed the ports in the port registry. We also only run on the control plane. I wouldn't be opposed to moving things, though, but that's probably out of scope for this enhancement. It might simplify the firewall rules, since the installer does need to talk to Ironic on the bootstrap, for example.
> on a node where there would be a port conflict.
>
> Other than the reserved ranges and the other ports listed below, host ports on worker nodes are available for use by customers. (But all host ports on masters are reserved for OCP use.)
Do we need to address what happens on single-node or compact clusters where control plane and workers are the same host?
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by:
The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing an approval comment.
But would they necessarily need to claim a host port to do that?
ok, so I clarified that:
So in other words, we will not actually go out of our way to prevent people from doing this; it's just that if they do it, and they pick a port that we also wanted to use in the next release, then trying to upgrade would fail. We could make masters work the same way as workers, but that puts more limits on us in the future. (Of the current reserved host ports, more than half are masters-only, and presumably this will continue in the future.) Also note that saying "customers cannot safely use host ports on masters" is an improvement over the current situation, which is that customers cannot safely use host ports on any node.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting. If this issue is safe to close now please do so with /lifecycle stale
@danwinship this looks like something we still want. Who should be included in the review? /remove-lifecycle stale
@danwinship @dcbw for 4789 I see a note mentioning
Ah, yes, we do sometimes use VXLAN with OVN Kubernetes, but it's not vSphere-related, it's Windows-containers-related.
Inactive enhancement proposals go stale after 28d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting. If this proposal is safe to close now please do so with /lifecycle stale
Stale enhancement proposals rot after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting. If this proposal is safe to close now please do so with /lifecycle rotten
/remove-lifecycle rotten
Rotten enhancement proposals close after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Reopen the proposal by commenting /close
@openshift-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This clarifies our usage of host network ports on nodes.
As currently written, this imposes the restriction on OCP that after 4.8, we will not claim any new ports (on workers) outside the 9000-9999 and 29000-29999 ranges unless we also provide a configuration option to move the new service to a different port to avoid conflicts with customer pods. This seems to be the only plausible way to avoid port conflicts with customer pods on upgrade. (Well, the other possibility is that we say customers are restricted to a specific range rather than that we are restricted to a certain range. But given that using ports outside the reserved range is complicated for us anyway since it means opening up more firewall ports, it seems to make more sense to just say that we'll only use the reserved ranges.)
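The reserved-range rule above can be sketched as a small check. This is a hypothetical helper for illustration only, not part of any OCP codebase: under the proposed policy, after 4.8 OCP would only claim new worker host ports that fall inside 9000-9999 or 29000-29999 (unless a port-relocation config option is also provided).

```go
package main

import "fmt"

// ocpReservedRanges lists the worker host-port ranges the enhancement
// proposes reserving for OCP components after 4.8.
var ocpReservedRanges = [][2]int{
	{9000, 9999},
	{29000, 29999},
}

// inOCPReservedRange reports whether a host port falls inside one of the
// ranges OCP may claim without providing a configuration escape hatch.
func inOCPReservedRange(port int) bool {
	for _, r := range ocpReservedRanges {
		if port >= r[0] && port <= r[1] {
			return true
		}
	}
	return false
}

func main() {
	// A port inside either range is fair game for OCP; anything else on
	// workers stays available to customer pods.
	for _, p := range []int{9100, 29102, 8080, 10250} {
		fmt.Printf("%d reserved=%v\n", p, inOCPReservedRange(p))
	}
}
```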
(If we (network team) agree on this then we will need to get buy-in from other teams too before moving forward.)
Assuming we agree on the "rules" set forth here, we should add e2e tests to enforce OCP compliance, and Prometheus alerts to enforce customer compliance.
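An e2e compliance test along those lines might scan hostPort claims and flag OCP-owned pods that take worker host ports outside the reserved ranges. Below is a minimal sketch over plain data, assuming the real test would gather pod specs via the Kubernetes API; the pod names, fields, and `violations` helper are illustrative, not an existing test:

```go
package main

import "fmt"

// hostPortUse records one hostPort claim: which pod makes it, whether the
// pod is an OCP component, and whether it runs only on the control plane
// (masters-only ports are exempt under the proposed rules).
type hostPortUse struct {
	pod         string
	port        int
	ocpOwned    bool
	mastersOnly bool
}

func inReserved(port int) bool {
	return (port >= 9000 && port <= 9999) || (port >= 29000 && port <= 29999)
}

// violations returns OCP pods claiming worker host ports outside the
// reserved ranges -- the cases the enhancement says CI should reject.
func violations(uses []hostPortUse) []string {
	var out []string
	for _, u := range uses {
		if u.ocpOwned && !u.mastersOnly && !inReserved(u.port) {
			out = append(out, fmt.Sprintf("%s claims worker host port %d", u.pod, u.port))
		}
	}
	return out
}

func main() {
	uses := []hostPortUse{
		{pod: "openshift-monitoring/node-exporter", port: 9100, ocpOwned: true}, // in range: OK
		{pod: "openshift-example/new-agent", port: 12345, ocpOwned: true},       // out of range: flagged
		{pod: "customer/auditd", port: 12345, ocpOwned: false},                  // customer pods are exempt
		{pod: "openshift-etcd/etcd", port: 2379, ocpOwned: true, mastersOnly: true}, // masters-only: exempt
	}
	for _, v := range violations(uses) {
		fmt.Println(v)
	}
}
```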
@dcbw @russellb @knobunc @trozet @abhat