Partial preemption of workloads #975
Comments
/cc
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

/remove-lifecycle stale
Thanks for the great project. We have a very similar requirement to what @ahg-g outlines.

We have yet to discuss this feature, but we're open to discussion.
A related but more specialized issue: #3762
What would you like to be added:
Partial preemption of workloads. Currently, preemption is performed on the whole workload, for example when giving back borrowed capacity. This is too aggressive for workloads that tolerate downscaling (e.g., a Ray cluster).
We could use a heuristic to select which PodSet to downscale. It could be as simple as going by their declaration order, combined with a flag indicating which PodSets can downscale and which cannot (so that, at the extreme, the whole workload is still preempted), as sketched below.
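To make the idea concrete, here is a minimal sketch of such an order-based heuristic. It is illustrative only and assumes hypothetical names: the podSet struct, the CanDownscale flag, and the pickVictim helper are not part of Kueue's API.

```go
// Illustrative sketch only: the types and field names below are hypothetical
// and only show one possible heuristic for choosing which pod set to
// downscale before falling back to preempting the whole workload.
package main

import "fmt"

// podSet is a hypothetical, simplified view of one of a workload's pod sets.
type podSet struct {
	Name         string
	Count        int32 // currently admitted pods
	MinCount     int32 // lowest count the workload still tolerates
	CanDownscale bool  // hypothetical flag: this pod set tolerates downscaling
}

// pickVictim walks pod sets in declaration order and returns the index of the
// first one that can give up `need` pods without dropping below its minimum.
// Returning -1 means no partial preemption is possible and the whole workload
// has to be preempted, as happens today.
func pickVictim(podSets []podSet, need int32) int {
	for i, ps := range podSets {
		if ps.CanDownscale && ps.Count-need >= ps.MinCount {
			return i
		}
	}
	return -1
}

func main() {
	need := int32(3) // pods we need to reclaim from this workload
	workload := []podSet{
		{Name: "head", Count: 1, MinCount: 1, CanDownscale: false},
		{Name: "workers", Count: 8, MinCount: 2, CanDownscale: true},
	}
	if i := pickVictim(workload, need); i >= 0 {
		fmt.Printf("downscale pod set %q by %d pods\n", workload[i].Name, need)
	} else {
		fmt.Println("no downscalable pod set; preempt the whole workload")
	}
}
```

Whatever shape the real flag ends up taking, the fallback branch keeps today's behavior (whole-workload preemption) for workloads that cannot downscale.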
Why is this needed:
To limit disruptions caused by preemption.
Completion requirements:
This enhancement requires the following artifacts:
The artifacts should be linked in subsequent comments.