Enable 'Namespace filtering' for 'LowNodeUtilization' strategy #573
Hi @jlegido. However, if you are just trying to make sure the pods are spread among your worker nodes, I would recommend using pod topology spread constraints rather than LowNodeUtilization anyway. If you would like to continue the discussion on this, please feel free to join in the issue I linked. Closing this as a duplicate.
@damemi: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@damemi many thanks for your quick reply, and sorry for duplicating the issue; I should have searched a bit more first. Unless I'm missing something, pod topology spread constraints are not an option for me, since the nginx ingress controller uses …
@jlegido you would place the constraints on the deployment podSpec, and the scheduler will distribute the nginx pods to the appropriate nodes when they are created. The service talks to those pods by label selectors.
Leaving this here just in case somebody else has the same requirements:
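A minimal sketch of such a constraint, assuming the nginx pods carry the label `app: nginx-ingress` (names and values here are illustrative, not the exact snippet from this thread):

```yaml
# Fragment of a Deployment podSpec; spreads matching pods evenly across nodes.
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                           # allow at most 1 pod of difference between nodes
        topologyKey: kubernetes.io/hostname  # treat each node as its own spreading domain
        whenUnsatisfiable: DoNotSchedule     # hard requirement, not best-effort
        labelSelector:
          matchLabels:
            app: nginx-ingress               # which pods to count when computing skew
```

With two workers and `maxSkew: 1`, the scheduler will not place a second replica on `k8s3` while `k8s4` has none.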
Many thanks again @damemi
@jlegido no problem, just as a note though, using …
Is your feature request related to a problem? Please describe.
Basically, my use case is not covered by the product.
Describe the solution you'd like
Enable 'Namespace filtering' for 'LowNodeUtilization' strategy.
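For illustration, strategies that already support namespace filtering take it as a strategy parameter; the sketch below applies that same shape to LowNodeUtilization, which is what this issue asks for (hypothetical syntax, not supported as of v0.20.0):

```yaml
# Hypothetical: the namespaces filter, as other strategies express it,
# applied to LowNodeUtilization.
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      namespaces:
        include:
        - "my-namespace"   # placeholder: only consider pods in this namespace
```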
Describe alternatives you've considered
Enable 'Label filtering' for 'LowNodeUtilization' strategy.
What version of descheduler are you using?
descheduler version: commit `31fd097c0a02adee30b98aab838f326ca1c22879` (I guess version v0.20.0)
Additional context
First of all, many thanks to all the people involved in this project for their time; I really appreciate it.
My use case is that I need to make sure that each of the workers (2 in my case) runs at least one pod of a certain kind (filtering either by namespace or by label would suffice). The scenario below does NOT meet my requirements:
I have 2 worker nodes, `k8s3` and `k8s4`, but all pods (from a certain namespace) are running on worker `k8s3`.
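For example, commands along these lines show which node each pod landed on (the namespace name is a placeholder):

```sh
kubectl get nodes
kubectl get pods -n <namespace> -o wide   # the NODE column shows every pod on k8s3
```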
:My configmap is as below:
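(A minimal sketch, assuming the v1alpha1 policy API of descheduler v0.20.0; the thresholds are illustrative, and the `namespaces` block is the non-working extra option mentioned below.)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: descheduler-policy-configmap
  namespace: kube-system
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:        # nodes below all of these are considered underutilized
              "cpu": 20
              "memory": 20
              "pods": 20
            targetThresholds:  # nodes above these are considered overutilized
              "cpu": 50
              "memory": 50
              "pods": 50
          namespaces:          # not supported for this strategy at v0.20.0
            include:
            - "my-namespace"
```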
Yes, I know, I added the non-working option `namespaces` just in case, but it has no effect. Thanks.