Enhance Binpack for dominant resources #410
Comments
/area scheduling
/cc @lminzhw
Does #380 address this?
Do we have any update on this? We have the same use case as well. Currently, we pre-reserve some CPU resources for GPU workloads.
Hope we can have this in 0.3; if you have any proposal, please feel free to share it :)
Hello 👋 Looks like there was no activity on this issue for the last 90 days.
Closing for now as there was no activity for the last 60 days after being marked as stale; let us know if you need this to be reopened! 🤗
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
Description:
In the binpack plugin, we consider all resources almost equally (each with a weight). The problem is that CPU/memory may be exhausted while GPUs sit idle, and those GPUs cannot be used because GPU apps also need some CPU/memory to run.
The solution in my mind is to also consider the dominant resource of the node; the binpack plugin should consider the dominant resources of the Pod and the Node together.
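A minimal sketch of what dominant-resource-aware binpack scoring could look like, assuming the pod's dominant resource (its largest share of node allocatable) gets an extra weight boost. The `Resources` type, the `dominant`/`binpackScore` functions, and the `dominantBoost` parameter are illustrative assumptions, not Volcano's actual plugin API.

```go
// Sketch only: not the real binpack plugin, just the scoring idea.
package main

import "fmt"

// Resources maps a resource name (e.g. "cpu", "memory", "nvidia.com/gpu")
// to an amount in arbitrary but consistent units.
type Resources map[string]float64

// dominant returns the resource on which the pod places the highest
// relative demand against the node's allocatable capacity.
func dominant(request, allocatable Resources) string {
	best, bestShare := "", -1.0
	for name, req := range request {
		if capacity, ok := allocatable[name]; ok && capacity > 0 {
			if share := req / capacity; share > bestShare {
				best, bestShare = name, share
			}
		}
	}
	return best
}

// binpackScore computes a weighted utilization score for placing the pod on
// the node: higher means the node is packed tighter after placement.
// The pod's dominant resource receives an extra boost, so nodes whose
// dominant resource is already partly used are preferred.
func binpackScore(request, used, allocatable Resources, weights map[string]float64, dominantBoost float64) float64 {
	dom := dominant(request, allocatable)
	score, weightSum := 0.0, 0.0
	for name, capacity := range allocatable {
		if capacity <= 0 {
			continue
		}
		w := weights[name]
		if name == dom {
			w *= dominantBoost // emphasize the pod's dominant resource
		}
		util := (used[name] + request[name]) / capacity
		if util > 1 {
			return 0 // node cannot fit the pod on this resource
		}
		score += w * util
		weightSum += w
	}
	if weightSum == 0 {
		return 0
	}
	return score / weightSum
}

func main() {
	weights := map[string]float64{"cpu": 1, "memory": 1, "nvidia.com/gpu": 2}
	request := Resources{"cpu": 2, "memory": 4, "nvidia.com/gpu": 1}
	used := Resources{"cpu": 6, "memory": 8, "nvidia.com/gpu": 2}
	allocatable := Resources{"cpu": 16, "memory": 64, "nvidia.com/gpu": 4}
	fmt.Printf("score: %.3f\n", binpackScore(request, used, allocatable, weights, 2.0))
}
```

With this kind of scoring, a GPU pod would favor nodes that still have GPUs plus enough CPU/memory headroom, instead of nodes whose CPU/memory are nearly exhausted while their GPUs stay stranded.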