
Replace namespace include/exclude filter with label selector #1499

Open
nabokihms opened this issue Aug 16, 2024 · 7 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@nabokihms

Is your feature request related to a problem? Please describe.
I have production and test namespaces labeled accordingly (env: prod, env: test). I'd like to deschedule pods based on these labels. As of today, every time engineers create a new namespace, someone has to go and change the descheduler config as well.

This problem affects the ability to provide self-service for my teams.

Describe the solution you'd like
I'd like to see a namespaces.labelSelector option that can filter namespaces based on labels. This is more flexible in every respect.

Including and excluding namespaces by name (as it works now) can also be expressed with this option (so we would no longer need include/exclude at all), e.g.,

namespaces:
  include:
  - ns-a
  - ns-b
  - ns-c

is equivalent to

labelSelector:
  matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values: [ns-a, ns-b, ns-c]
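The equivalence can be sketched in a few lines of Python. This is illustrative only (the helper names are hypothetical, not descheduler API): an include list translates into a single `In` matchExpression on `kubernetes.io/metadata.name`, which is then evaluated with ordinary label-selector semantics:

```python
# Sketch of the include-list-to-labelSelector equivalence.
# Helper names are illustrative, not part of the descheduler API.

def include_to_selector(names):
    """Translate an include list into a labelSelector-style expression."""
    return {"matchExpressions": [{
        "key": "kubernetes.io/metadata.name",
        "operator": "In",
        "values": list(names),
    }]}

def matches(selector, labels):
    """Minimal In/NotIn evaluation, mirroring Kubernetes selector semantics."""
    for expr in selector.get("matchExpressions", []):
        value = labels.get(expr["key"])
        if expr["operator"] == "In" and value not in expr["values"]:
            return False
        if expr["operator"] == "NotIn" and value in expr["values"]:
            return False
    return True

selector = include_to_selector(["ns-a", "ns-b", "ns-c"])
print(matches(selector, {"kubernetes.io/metadata.name": "ns-a"}))  # True
print(matches(selector, {"kubernetes.io/metadata.name": "ns-x"}))  # False
```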

Describe alternatives you've considered

--

What version of descheduler are you using?

descheduler version: latest

Additional context

@nabokihms nabokihms added the kind/feature Categorizes issue or PR as related to a new feature. label Aug 16, 2024
@damemi
Contributor

damemi commented Aug 16, 2024

+1, this is honestly how we probably should have done it. @ingvagabund @knelasevero do you know of any reason why we did the include/exclude pattern instead of just label selectors?

@ingvagabund
Contributor

ingvagabund commented Aug 18, 2024

We cannot guarantee mutual exclusion of included and excluded namespaces with the full expressiveness of labelSelector. If there's a non-empty intersection of the namespaces, should we ignore the shared namespaces? Should we be more restrictive and ignore the shared included namespaces? Should we error instead and let the user re-label the namespaces or update the configuration? We decided to aim for simplicity and use explicit lists of included/excluded namespaces.

@nabokihms
Author

@ingvagabund thanks for your answer, but it is puzzling. Are you saying that the feature will not be accepted because it is not a simple feature?

What about the use case then? What can you suggest to solve the issue?

@damemi
Contributor

damemi commented Aug 19, 2024

@ingvagabund if we get rid of the include and exclude fields and make it one label selector (i.e., where people can set In or NotIn themselves), then I don't think there's any issue with conflicting includes/excludes. It just becomes label selector semantics.

@ingvagabund
Contributor

ingvagabund commented Aug 29, 2024

What you are saying makes sense. I'm still trying to recall why we decided to use a combination of included/excluded namespaces. I can think of performance reasons. If we allow a combination of In and NotIn, it might be expensive to compute the final set of matched namespaces. With a list of only included or (exclusive or) only excluded namespaces, it's fast to get the list of matched namespaces. If we allow a combination of both, the computation might get expensive. E.g.,

labelSelector:
  matchExpressions:
    - key: labelKey1
      operator: In
      values: [labelValue1]
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values: [ns-10000, ns-9999, ns-9998]

When the first expression matches most of the 10 000 namespaces, it might take a while to enumerate the list. We have not made any performance measurements in big clusters (or clusters with very many namespaces), so it's all theoretical here.
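To put a rough shape on that concern, here is a self-contained Python sketch (not descheduler code; the namespace data is synthetic) that evaluates the combined In/NotIn selector above against 10 000 fake namespaces and times the match:

```python
import time

def matches(exprs, labels):
    # Minimal In/NotIn evaluation over a list of matchExpressions.
    for expr in exprs:
        value = labels.get(expr["key"])
        if expr["operator"] == "In" and value not in expr["values"]:
            return False
        if expr["operator"] == "NotIn" and value in expr["values"]:
            return False
    return True

# 10 000 synthetic namespaces, all carrying labelKey1=labelValue1.
namespaces = [
    {"kubernetes.io/metadata.name": f"ns-{i}", "labelKey1": "labelValue1"}
    for i in range(10_000)
]

# The combined selector from the comment above; sets make membership O(1).
exprs = [
    {"key": "labelKey1", "operator": "In", "values": {"labelValue1"}},
    {"key": "kubernetes.io/metadata.name", "operator": "NotIn",
     "values": {"ns-10000", "ns-9999", "ns-9998"}},
]

start = time.perf_counter()
matched = [ns for ns in namespaces if matches(exprs, ns)]
elapsed = time.perf_counter() - start
print(f"matched {len(matched)} namespaces in {elapsed * 1000:.2f} ms")
```

In this in-memory form the combined selector is still linear in the number of namespaces; in a real cluster the dominant cost would more likely be listing/watching the namespaces than the match itself, though that is an assumption worth measuring.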

@ingvagabund
Contributor

ingvagabund commented Aug 29, 2024

What we might do is print a warning when the number of matched namespaces exceeds a certain limit (e.g. 1000), warning the user that the computation might get expensive. It's worth checking how much time it takes to get those 1000 (and/or other numbers of) matched namespaces. I also wonder if there's a limit on the number of matchExpressions terms. One might (accidentally) do e.g.:

labelSelector:
  matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values: [ns-10000]
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values: [ns-10000, ns-9999]
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values: [ns-10000, ns-9999, ns-9998]
      ...

E.g. the label selector might get automatically generated by a buggy generator with unchecked limits on the number of terms. Do we want to introduce some limits on the number of terms here?
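Such a guard could look like the sketch below (illustrative only; the limit value and function name are arbitrary, not an existing descheduler API), rejecting selectors whose matchExpressions list has grown past a configured cap:

```python
MAX_MATCH_EXPRESSIONS = 100  # arbitrary illustrative limit, not a real default

def validate_selector(selector, limit=MAX_MATCH_EXPRESSIONS):
    """Reject label selectors whose matchExpressions list exceeds `limit`."""
    exprs = selector.get("matchExpressions", [])
    if len(exprs) > limit:
        raise ValueError(
            f"labelSelector has {len(exprs)} matchExpressions terms; "
            f"at most {limit} are allowed"
        )
    return selector

# A buggy generator emitting one NotIn term per excluded namespace:
generated = {"matchExpressions": [
    {"key": "kubernetes.io/metadata.name", "operator": "NotIn",
     "values": [f"ns-{i}"]}
    for i in range(500)
]}

try:
    validate_selector(generated)
except ValueError as err:
    print(err)
```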

@ingvagabund
Contributor

Also, I wonder to what extent the fake clientset actually implements support for a field selector. The last time I checked it was not there. So when the descheduler runs in dry mode, we will need to implement the missing functionality.
