
[Part 8 of #758] Added singleuser affinity configuration #926

Merged
merged 3 commits
Sep 20, 2018

Conversation

consideRatio
Member

@consideRatio commented Sep 11, 2018

@consideRatio changed the title [Part 8 of #758] [Part 8 of #758] Added singleuser. node/pod/podAnti affinity Sep 11, 2018
@consideRatio changed the title [Part 8 of #758] Added singleuser. node/pod/podAnti affinity [Part 8 of #758] Added singleuser affinity configuration Sep 11, 2018
{{- $dummy := set . "hasNodeAffinity" (or .nodeAffinityRequired .nodeAffinityPreferred) }}
{{- $dummy := set . "hasPodAffinity" (or .podAffinityRequired .podAffinityPreferred) }}
{{- $dummy := set . "hasPodAntiAffinity" (or .podAntiAffinityRequired .podAntiAffinityPreferred) }}
{{- $dummy := set . "hasAffinity" (or .hasNodeAffinity .hasPodAffinity .hasPodAntiAffinity) }}
Member Author

I need a lot of helpers, some of which contain the output at a level where I can augment it with other default output from affinity presets in future PRs.
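As a rough sketch of that layering (hypothetical helper name, not the chart's actual code), a low-level helper can render just the user-provided entries, leaving a natural place for preset-generated defaults to be prepended later:

{{- /* Hypothetical sketch, not this PR's actual helper. */}}
{{- define "jupyterhub.nodeAffinityRequired" -}}
{{- /* A future affinity preset could prepend its own entries here */}}
{{- if .Values.singleuser.extraNodeAffinity.required }}
{{ toYaml .Values.singleuser.extraNodeAffinity.required }}
{{- end }}
{{- end }}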

@@ -143,6 +143,16 @@ auth:

singleuser:
  extraTolerations: []
  nodeSelector: {}
  extraNodeAffinity:
Member
Instead of exposing six different configurables, can we expose the Kubernetes .affinity structure directly? I think that would be a much better and better-documented experience.
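For illustration, the suggestion is to accept the Kubernetes-native structure verbatim, something like the following sketch (the label key shown is illustrative):

singleuser:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: hub.jupyter.org/node-purpose  # illustrative label
                operator: In
                values: [user]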

Member Author

We want the affinity to be utilized by the image puller as well, which is configured directly from a helm template. But we only want the daemonset to use the affinity partially; for example, the daemonset should not care about affinity.podAffinity but should care about affinity.nodeAffinity.requiredDuring....

Attempting to extract pieces from a big dict passed as a value in config.yaml is not feasible, even though I can imagine someone has come up with some super complicated trick to accomplish it.
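In other words, only some branches of a single affinity value would apply to the puller daemonset, roughly like this (a sketch of the split being argued, not implemented code):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:  # relevant to the puller daemonset
      ...
    preferredDuringSchedulingIgnoredDuringExecution: # user pods only
      ...
  podAffinity:                                       # user pods only
    ...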

Member Author

Doing so would also rule out having a configurable preset of require / prefer / ignore for matching the node affinity of user / core pods with user / core pools, for example.

Member

We're talking about one level of a dict affinity.nodeAffinity, aren't we? Isn't that less complex than what's happening here, where we are mapping several levels of a dict to different things?

{{- /*
input: podKind
*/}}
{{- define "jupyterhub.affinity" -}}
Member

This helper seems to be defined but unused.

Member Author

Yes, you are right. This PR only contains what is required for configuring the user-related affinities, and this helper was not needed by the daemonset, which only requires a subset of the affinity helpers introduced.
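For reference, a usage sketch of such a podKind-parameterized helper, once a pod spec needs it (hypothetical call site, not code from this PR):

{{- /* Hypothetical call site for the helper. */}}
{{- $_ := set . "podKind" "user" }}
affinity:
{{ include "jupyterhub.affinity" . | indent 2 }}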

  preferred: []
extraPodAntiAffinity:
  required: []
  preferred: []
Member

Can we make this expose the Kubernetes affinity structure directly, instead of exposing six different configurables? It seems that would be a lot simpler, both for us to maintain and for users to understand. In general, let's try very hard to avoid using anything but Kubernetes' own names and structures for things.

Member Author

So I have considered this in depth now, and the conclusion is no, not with Helm 2, in the values section. If we do, we won't be able to augment our configurable presets within the templates.

But in kubespawner, which is configured through Python, where we have great capabilities for working with what comes from the configmap, we could combine them again and set one big affinity traitlet on kubespawner.

I was confused about this until recently. In summary, this PR must be like this, but the previous one for kubespawner did not have to be that way =/

{{- $dummy := set . "hasNodeAffinity" (or .nodeAffinityRequired .nodeAffinityPreferred) }}
{{- $dummy := set . "hasPodAffinity" (or .podAffinityRequired .podAffinityPreferred) }}
{{- $dummy := set . "hasPodAntiAffinity" (or .podAntiAffinityRequired .podAntiAffinityPreferred) }}
{{- $dummy := set . "hasAffinity" (or .hasNodeAffinity .hasPodAffinity .hasPodAntiAffinity) }}
Member

All these helpers seem to include exactly one thing and be used in exactly one place. It seems like there are too many layers of abstraction, and this would be simplified with far fewer helpers.

Member Author

In this PR, some of them may be too underutilized to motivate their existence, but they were refactored in this way to reduce complexity in the end. Complexity is a recurring issue, and I deeply agree. I have struggled a lot with the Helm charts to make things work in a manner that doesn't break DRY in a big way or simply work around challenges with scope.

Member

DRY isn't the most important thing in general. Every design choice is a trade-off, so we have to answer the question: is the redundancy of having some duplicated logic more of a problem than the complexity of the added abstractions needed to avoid repetition? In this case, as far as I can tell, the duplication is less costly than the abstraction.

Extra storage labels are now configurable through `singleuser.storageExtraLabels` in config.yaml.
<core|user> pods now get `tolerations` for the node taint `hub.jupyter.org/dedicated=<user|core>:NoSchedule`, which can optionally be added to individual nodes or to all nodes in a node group (a.k.a. node pool).

Note that due to a bug with using the `gcloud` CLI, we also add a toleration for the same taint with `/` replaced by `_`.

This commit also makes `singleuser.extraTolerations` configurable, allowing you to add your own custom tolerations to the user pods.
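For example, an extra toleration could be added like this (the taint key and value are illustrative, not something the chart defines):

singleuser:
  extraTolerations:
    - key: example.com/gpu  # illustrative taint key
      operator: Equal
      value: "true"
      effect: NoSchedule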
These affinities allow for more fine-grained control over where a pod will be scheduled.
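A sketch of using the new configurables, with an illustrative node label (the required entries follow the shape of Kubernetes nodeSelectorTerms):

singleuser:
  extraNodeAffinity:
    required:
      - matchExpressions:
          - key: disktype  # illustrative node label
            operator: In
            values: [ssd]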
@minrk minrk merged commit 7d3255f into jupyterhub:master Sep 20, 2018