[installer] Allow configuration of affinity for server and proxy components #9558
Conversation
@andrew-farries Let's limit the scope of this PR to IDE + webapp here by just using
I think we don't lose anything by applying the change uniformly to all components. Workspace components are still picking up their default node affinity and no pod affinities with these changes. IMO there is more value in having all components configure their affinity in the same way, even if only a couple use the experimental config to do more than simple node affinity.
Could you add some context as to why this is necessary in the first place? When it comes to anti-affinity for server, can't we just bake that right in? I.e. if this behaviour is good for us, won't it be good for everyone else? (And if it isn't, can't we add a single boolean flag then?)
This won't work, as anti-affinity is provider specific, and adds complexity where it is not needed.
I think we understand this requirement. But on the other hand we have to cater for a use case where additional flexibility solves a real problem, e.g. scalability issues. IMO we can shortcut this if we made this webapp-component specific. Or even narrower: server, proxy. @csweichel Would that make sense?
I think it's also worth emphasising that this is under the `experimental` config section.
I don't think that's correct. The anti-affinity we need for our components is:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gitpod.io/workload_services
              operator: In
              values:
                - "true"
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: component
                operator: In
                values:
                  - server
          topologyKey: gitpod.io/workload_services
        weight: 100
```
Not questioning that we need degrees of freedom to do our job. I'm merely questioning if we need them here.
If we had to do this (and I'm not convinced that is the case yet), this would be the way to go.
Fair point - that said, it's not just the promises we make to users w.r.t. the config surface, but predominantly the complexity and installation variants we ourselves need to maintain and test.
This also opens the door to support questions in Discord for the DCS/self-hosted teams.
@andrew-farries @geropl how do we continue with this one?
The consensus here seems to be that we make this less generic; i.e. only allow the few components that need to configure their affinities for a SaaS deployment to do so.
Agreed. This removes the incidental complexity while keeping the flexibility where we'd like to keep it. @andrew-farries Let's sync on the concrete code changes. 👍
Given the discussion above, is this pull request still ready for review, @andrew-farries? If not (because there are ongoing discussions that need to be finished first) would you mind marking this pull request as “draft” again so that it is clear for all requested reviewers whether they should review this pull request or not? Thanks. 🙏
Force-pushed from f6118dc to 97e4854
Rebased (and updated PR description) to allow only the `server` and `proxy` components to configure their pod anti-affinity. This should address the concerns above about allowing too much variation via the installer. @csweichel are you ok with this, or would you rather make this more restrictive by further limiting the pod affinity/anti-affinity that these components are allowed to set?
I still don't understand why anti-affinity needs to be configurable to this degree, i.e. why devolve the concrete configuration/parametrisation to the experimental config. IMHO that introduces unnecessary degrees of freedom. It would seem that I'm missing some info here, or am not seeing a concrete scenario that you're designing for.
The pod anti-affinity we need for `server`:

```yaml
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
    - podAffinityTerm:
        labelSelector:
          matchExpressions:
            - key: component
              operator: In
              values:
                - server
        topologyKey: gitpod.io/workload_services
      weight: 100
```

and for `proxy`:

```yaml
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
    - podAffinityTerm:
        labelSelector:
          matchExpressions:
            - key: component
              operator: In
              values:
                - proxy
        topologyKey: gitpod.io/workload_services
      weight: 100
```

tbh, I'm not sure how much variance we expect to have to support here; if this is all we need then we could simplify this further to a simple toggle as you say. @geropl would you be happy with that change?
My reasoning is more that we should not overload installer with these very specific optimizations/concerns here. I also see the point of unnecessary freedom, but maybe I interpreted the
@andrew-farries Let's do that!
Setting to true will add pod (anti)-affinity fields to the server and proxy components.
Force-pushed from 97e4854 to 8258c5f
Rebased to make the pod anti-affinity for `server` and `proxy` configurable via the boolean toggle.
```go
Server         *ServerConfig    `json:"server,omitempty"`
PublicAPI      *PublicAPIConfig `json:"publicApi,omitempty"`
UsePodAffinity bool             `json:"usePodAffinity"`
```
nit: Should read "anti": ``UsePodAntiAffinity bool `json:"usePodAntiAffinity"` ``
LGTM!
Please have a look at the comment above ☝️
@andrew-farries The PR is based on another branch, please re-create against main.
Description
One of the Webapp team's epics for Q2 is to use the Gitpod installer to deploy to Gitpod SaaS. In order to do that we will need to add additional configuration to the installer to make the output suitable for a SaaS deployment as opposed to a self-hosted deployment.
This PR adds the ability to configure the affinities (both node and inter-pod) for all components. Following discussion, we now only allow the `server` and `proxy` components to configure their pod anti-affinity via a simple boolean toggle (`experimental.webapp.usePodAffinity`) in the installer config.

Related Issue(s)
Part of #9097
How to test
Create an installer config file containing this
experimental
section:Get a
versions.yaml
for use with the installer:Then invoke the installer as:
The rendered output will set anti-affinity for the `server` and `proxy` components. All other components are unaffected and keep their hard-coded (node-)affinities.
Release Notes
Documentation
None.