
[Part 12 of #758] Preset: scheduling.podAffinity.preferScheduleNextToRealUsers #930

Conversation

consideRatio (Member) commented Sep 11, 2018

Warning

This PR needs a rebase before merge. The new content of this PR is actually only the following commit.


Closing remarks

This preset was decided to be too detailed and of too little use.

About

When a user pod is about to schedule, should it favor scheduling on a
node with real users on it? A real user is a user pod as compared to a
user-placeholder pod.

Enabling this will pack real users together even in situations where the
scheduler may fail to discern two nodes from a resource request perspective.

Note that this is a minor tweak, and it is not recommended for very large
clusters as it can reduce the scheduler's performance.
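A minimal sketch of enabling this preset in config.yaml, assuming the key path
from the PR title (the final schema could name or nest it differently):

```yaml
scheduling:
  podAffinity:
    preferScheduleNextToRealUsers: true
```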

We are now using a trick to restart the hub whenever the hash of the
hub's configmap changes. This commit modernizes old logic from when
that trick wasn't in use and reduces the need for an inline explanation
of the code.
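For reference, a sketch of the common Helm technique behind that trick; the
template path and annotation name below are assumptions, not necessarily this
chart's actual file layout:

```yaml
# Annotate the hub deployment's pod template with a hash of the configmap,
# so any configmap change alters the pod template and rolls the hub pod.
spec:
  template:
    metadata:
      annotations:
        checksum/config-map: {{ include (print $.Template.BasePath "/hub/configmap.yaml") . | sha256sum }}
```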
Storage (PVCs) now gets a label indicating what kind of storage it is:

```yaml
hub.jupyter.org/storage-kind: user # or core, for the hub-db-dir
```

Extra storage labels are also configurable through config.yaml via
`singleuser.storageExtraLabels`.
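A sketch of what that could look like; the key path follows this PR's text, and
the label itself is a hypothetical example:

```yaml
singleuser:
  storageExtraLabels:
    # hypothetical label, purely for illustration
    billing/team: data-science
```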
Core and user pods now get `tolerations` for the node taint
`hub.jupyter.org/dedicated=<core|user>:NoSchedule`, which can optionally be
added to individual nodes or to all nodes in a node group (a.k.a. node pool).

Note that due to a bug with using the `gcloud` CLI, we also add the
toleration for the same taint where `/` is replaced with `_`.
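A sketch of what this could look like on a user pod, assuming the `Equal`
operator and the taint value `user`:

```yaml
tolerations:
  - key: hub.jupyter.org/dedicated
    operator: Equal
    value: user
    effect: NoSchedule
  # duplicated with "_" instead of "/" to work around the gcloud CLI bug
  - key: hub.jupyter.org_dedicated
    operator: Equal
    value: user
    effect: NoSchedule
```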

In this commit, `singleuser.extraTolerations` is also made configurable,
allowing you to add your own custom tolerations to the user pods, as sketched
below.
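A sketch in config.yaml; the key path comes from this PR's text, while the
taint itself is a hypothetical example:

```yaml
singleuser:
  extraTolerations:
    # hypothetical taint, purely for illustration
    - key: example.com/gpu
      operator: Exists
      effect: NoSchedule
```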
These affinities allow for more fine-grained control of where a pod will
schedule. Read about them in schema.yaml.