add gomaxprocs limit, return node fit error and pod QoS in advance #1423
Conversation
Hi @fanhaouu. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from bba552a to 03a45ef.
@a7i, can you help me review this PR?
Hi @fanhaouu, great contribution. Going to copy some of the maintainers for feedback as well. My feedback:
Thank you for your reply.
Point 1: The descheduler currently runs some goroutines, though not a very large number, so capping GOMAXPROCS is optional. However, the current Go runtime cannot recognize container CPU limits (the JVM, by contrast, has supported container environments for a long time), so I think it is better to add the cap. Our company's descheduler deployment runs a lot of goroutines, and performance improved significantly after adding this restriction, which is why I kept this change in the PR. (See the sketch below.)
Point 2: The default descheduling loop interval is 5 minutes, and it can be adjusted. The larger the cluster, the longer the NodeFit check takes, so a full loop can easily exceed 5 minutes; what if a user sets the interval to 1 minute? Comparing nodes one by one is unnecessary and far too time-consuming; we should behave like the Filter policy in the scheduler and fail fast.
In short, if we can make the program better and faster without affecting any functionality, I think it makes a lot of sense. We are also developing a descheduler cache mechanism internally: the descheduler currently pulls the data it filters on at policy runtime, but a good part of that data can be cached in advance, just like the cache in the scheduler. Once it is running stably in production at our company, I will contribute it to the descheduler community and we can review it together.
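A minimal sketch of point 1, assuming the widely used go.uber.org/automaxprocs library rather than this PR's actual diff: it reads the container's cgroup CPU quota and caps runtime.GOMAXPROCS accordingly.

```go
package main

import (
	"log"
	"runtime"

	// Assumption: this PR may set GOMAXPROCS differently; automaxprocs is
	// one common way to align it with the container CPU quota.
	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// Derive GOMAXPROCS from the cgroup CPU quota instead of the host's
	// core count; the returned undo func restores the previous value.
	undo, err := maxprocs.Set(maxprocs.Logger(log.Printf))
	if err != nil {
		log.Printf("failed to set GOMAXPROCS from CPU quota: %v", err)
	}
	defer undo()

	log.Printf("GOMAXPROCS is now %d", runtime.GOMAXPROCS(0))
}
```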
As Amir mentioned, would you please break the PR into three separate PRs? Some of the suggested changes deserve a dedicated discussion. Wrt. NodeFit, I am in the process of composing a KEP: #1421. This sounds like a good use case to include in the proposal.
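For reference, the fail-fast NodeFit idea discussed above could look roughly like the sketch below; the function and type names are hypothetical illustrations, not the descheduler's actual API.

```go
import v1 "k8s.io/api/core/v1"

// nodeFitCheck is a hypothetical predicate; the real descheduler checks
// things like taints, node selectors, and resource requests.
type nodeFitCheck func(pod *v1.Pod, node *v1.Node) error

// podFitsNode returns on the first failing check, mirroring how scheduler
// Filter plugins short-circuit, instead of evaluating every check and
// accumulating all errors before reporting.
func podFitsNode(pod *v1.Pod, node *v1.Node, checks []nodeFitCheck) error {
	for _, check := range checks {
		if err := check(pod, node); err != nil {
			return err // fail fast: skip the remaining checks
		}
	}
	return nil
}
```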
Hi, I have split this into 3 PRs; looking forward to your feedback:
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/close
It's already been split into 3 PRs.
@a7i: Closed this PR. In response to this:
> /close
> It's already been split into 3 PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This PR aims to address the following three issues:
1. Add a GOMAXPROCS limit so the Go runtime respects the container CPU quota.
2. Return node fit errors in advance (fail fast) rather than evaluating every check.
3. Return pod QoS in advance.