Support labelSelector in RemoveDuplicates #654
Comments
When we first implemented this it was brought up that there could be undesirable effects for some strategies, like RemoveDuplicates (#510 (review)). Thinking about it more now, I think RemoveDuplicates uses such a specific definition of "what is a duplicate" that label filtering shouldn't cause too much of a problem. It is theoretically possible for two pods to have the same namespace/name/image but different labels, which is where this would cause issues (the strategy would not evict all the duplicates properly). Honestly, that chance seems low to me. @ingvagabund, what do you think? I ask because you originally pointed this out.
Thanks Mike. I can elaborate on one use case I have in mind, just in case it helps. I have a deployment labelled with app=myapp. It needs to run more pods than there are nodes in the cluster (in our case, 32 pods in a cluster with around 20 nodes). This should work fine since 0.20. However, due to bug #531, some pods are continuously destroyed when there are cordoned nodes in the cluster, which is quite common. As a workaround, I would like to tell descheduler to ignore pods with the label app=myapp in the RemoveDuplicates strategy. In my opinion, there are some valid use cases where this is useful. Perhaps it could be supported with a note in the documentation explaining the corner case where using the labelSelector could leave some duplicates unevicted.
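To make the workaround concrete, a hypothetical sketch of the kind of policy I have in mind; RemoveDuplicates does not accept labelSelector today, so the field's shape here is borrowed from the strategies that already support it:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
    params:
      # Hypothetical field: not supported by RemoveDuplicates at the time of
      # writing. Pods labelled app=myapp would be excluded from duplicate
      # eviction.
      labelSelector:
        matchExpressions:
        - key: app
          operator: NotIn
          values:
          - myapp
```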
Sounds good as long as the labelSelector is used in the eviction phase only.
Ignore from consideration or from eviction?
I think in the case of RemoveDuplicates, consideration and eviction are basically the same set. Like I mentioned above, there could technically be two pods that match the criteria for a duplicate but have different labels, but I think the chance of this is low given the specificity.
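For example, a hypothetical pair of pods like the following would count as duplicates of each other (same namespace, owner, and container image, per the criteria above) even though their labels differ, so a selector keyed on the extra label would see only one of them:

```yaml
# Hypothetical example: both pods are assumed to share the same ReplicaSet
# owner, so RemoveDuplicates treats them as duplicates.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: web-aaaaa
  labels:
    app: myapp
spec:
  containers:
  - name: web
    image: nginx:1.21
---
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: web-bbbbb
  labels:
    app: myapp
    canary: "true"  # extra label; a selector on canary would miss web-aaaaa
spec:
  containers:
  - name: web
    image: nginx:1.21
```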
Thanks Jan, Mike. I tried to implement it, see https://github.com/kubernetes-sigs/descheduler/compare/master...palmerabollo:label-filter?expand=1 but I'm not able to make the two new unit tests pass. I think I should add something here https://github.com/kubernetes-sigs/descheduler/compare/master...palmerabollo:label-filter?expand=1#diff-9589460dd68cfcc1a2a9ff1065a3d60852c6131d1489035bfdac10b8511e2ca0R96 but I'm having a hard time figuring it out since I'm not a Golang dev (I've copy-pasted it from other strategies' code). I think it might be an easy task for somebody more familiar with the code. Any help is truly welcome.
Going back to this. What's the relation of pods labeled with app=myapp to cordoned nodes?
There's a hard assumption on the labels. The strategy targets a set of pods owned by the same controlling object (RC, RS, ...). If a label selector is used (for any reason), you will leave some pods out when the code responsible for forcing even distribution is calculating pods for eviction. If not taking the cordoned nodes into account when evicting pods in the strategy is the issue, we need to fix that, instead of making the strategy blind.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/remove-lifecycle stale
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
@palmerabollo can you please reopen this issue? /reopen
@aslafy-z: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@palmerabollo: Reopened this issue.
/lifecycle stale
/lifecycle rotten
Not stale
This will be available as a DefaultEvictor config in the v1alpha2 config API release, so we should be able to close this (#929, #955).
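For reference, a hypothetical sketch of what that could look like in the v1alpha2 API, assuming the DefaultEvictor's labelSelector argument constrains which pods any enabled strategy may evict:

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
- name: default
  pluginConfig:
  - name: "DefaultEvictor"
    args:
      # Hypothetical selector: pods labelled app=myapp would not match and
      # so would never be evicted.
      labelSelector:
        matchExpressions:
        - key: app
          operator: NotIn
          values:
          - myapp
  plugins:
    balance:
      enabled:
      - "RemoveDuplicates"
```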
@damemi: Closing this issue.
Is your feature request related to a problem? Please describe.
Some strategies, such as RemovePodsViolatingInterPodAntiAffinity, support a labelSelector parameter. However, RemoveDuplicates does not support it, and it would come in handy in some scenarios.

Describe the solution you'd like
I'd like RemoveDuplicates to support labelSelector, so that a strategy could be defined such as:
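A hypothetical sketch of the requested configuration; the placement of labelSelector mirrors the strategies that already support it:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
    params:
      # Hypothetical: the labelSelector support this issue requests.
      labelSelector:
        matchLabels:
          app: myapp
```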
What version of descheduler are you using?
descheduler version: 0.20, but the same applies to 0.22 AFAIK.
Additional context
Thank you!