
Decide on how to handle out-of-tree plugins #1089

Open
knelasevero opened this issue Mar 13, 2023 · 21 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@knelasevero (Contributor) commented Mar 13, 2023

There are a few ways to do this. One option is to follow the kube-scheduler path and create a new repo hosting all custom plugins. That brings some problems: the SIG would have to maintain everything in that new repo, and coordinating ownership could be hard.

Wanted to raise this issue so we can discuss other options:

EDIT:

  • Just remembered another option. In ESO we keep a bunch of providers/plugins in-tree, but we classify them as internally maintained when the maintainers want to personally support them, and as community maintained (listing contact people others can ping) when we don't guarantee e2e tests and full support -> ESO stability support. ESO is still considering out-of-tree plugins as well, but I think the model is worth a look.

  • Another option:
    • A combination of the above. Having a way to guarantee that an image contains everything that is supported is useful in enterprise situations. Images with supported plugins pre-baked help service providers quickly say "we support that" or "we don't support that". Considering both enterprise compliance and usability, offering runtime plugins as an option and the custom image with plugins baked in as the recommended path scratches both itches (see the sketch below).
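
A minimal sketch of what such a pre-baked build could look like. This is not the descheduler's real API; every identifier below (Plugin, register, removeFailedCronJobs) is hypothetical and only illustrates the idea that the supported plugin set is fixed at build time, so the resulting image is exactly "what the vendor supports".

```go
// Hypothetical sketch of a "pre-baked" descheduler build.
// None of these identifiers come from the real descheduler code base.
package main

import "fmt"

// Plugin stands in for the descheduler's plugin interface.
type Plugin interface {
	Name() string
}

// registry maps plugin names to constructors, mirroring the usual
// registry pattern used by plugin frameworks.
var registry = map[string]func() Plugin{}

func register(name string, newFn func() Plugin) { registry[name] = newFn }

// removeFailedCronJobs is an imaginary third-party plugin that a
// vendor wants to support in their image.
type removeFailedCronJobs struct{}

func (removeFailedCronJobs) Name() string { return "RemoveFailedCronJobs" }

func main() {
	// The supported plugin set is fixed here, at build time; anything
	// not registered is, by definition, unsupported in this image.
	register("RemoveFailedCronJobs", func() Plugin { return removeFailedCronJobs{} })

	for name := range registry {
		fmt.Println("supported plugin:", name)
	}
	// A real build would hand the registry to the descheduler's run
	// loop instead of just printing it.
}
```

Runtime-loaded plugins could then remain available for users who accept the support trade-off, while the baked image stays the recommended, auditable default.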
@knelasevero (Contributor, Author) commented Mar 13, 2023

@damemi based on our discussion

@binacs (Member) commented Mar 13, 2023

/cc @binacs

@damemi (Contributor) commented Mar 13, 2023

Mentioned this offline to @knelasevero, but I think the "shared repo of third-party plugins" approach brings a lot of maintenance overhead and defeats the purpose of a development framework.

The scheduler-plugins repo is essentially another repo for the SIG to maintain. There, the concept of "out-of-tree" makes more sense because the "in-tree" alternative is code living in the main Kubernetes project repo, which is much larger and slower-moving. For us, an out-of-tree repo would be pretty much just a second descheduler repo.

(For this reason, I think it's better to refer to our plugins as "first-party"/"third-party" since we don't have much of a tree to begin with.)

Code ownership becomes a problem too. As developers contribute third-party plugins to a centralized repo, there's nothing binding them to owning that code forever. People change jobs and companies change priorities, which leaves the SIG in the difficult position of enforcing deprecation labels on abandoned plugins.

I think a much more lightweight approach would be a simple index repo (similar to how Krew indexes third-party kubectl plugins and how OperatorHub lets people contribute operators). Then the SIG carries essentially zero maintenance burden for third-party plugins. Users can host and maintain them at their own pace, and descheduler maintainers don't have to be gatekeepers reviewing every new plugin.

If an index approach is done right, it can be pretty simple to provide Go code generators to build a descheduler based on the indexed plugins. That lets us provide an image people can grab if that's all they want.
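
If we went that route, the generator could be quite small. A rough sketch follows, with all module paths and the example.com/descheduler entrypoint invented purely for illustration: the tool reads the index (hard-coded here) and emits a main.go whose blank imports let each plugin self-register in init().

```go
// Hypothetical sketch of an index-driven generator for a custom
// descheduler build. All module paths below are made up.
package main

import (
	"fmt"
	"os"
	"text/template"
)

var mainTmpl = template.Must(template.New("main").Parse(`package main

import (
	desch "example.com/descheduler/cmd" // placeholder for the real entrypoint
{{- range .}}
	_ "{{.}}" // plugin self-registers in init()
{{- end}}
)

func main() { desch.Run() }
`))

func main() {
	// A real tool would parse this list from the index repo's manifests.
	plugins := []string{
		"example.com/acme/descheduler-plugins/removefailedcronjobs",
		"example.com/contoso/descheduler-plugins/nodetaintbalancer",
	}
	if err := mainTmpl.Execute(os.Stdout, plugins); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Writing the output into a module that also pins the plugin versions would give the reproducible "grab this image" artifact mentioned above.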

@knelasevero (Contributor, Author) commented Mar 13, 2023

Where can I see this Kueue index? Do you mean Krew?

@damemi (Contributor) commented Mar 13, 2023

@a7i (Contributor) commented May 12, 2023

linking another PR: #1144

@knelasevero (Contributor, Author)

Hey

https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension

Kube-scheduler is investigating using wasm to extend functionality. I think we could do the same (investigate).

kubernetes/kubernetes#112851 (comment)

We can monitor it and see how it goes.

@damemi (Contributor) commented Jul 19, 2023

Looking toward next steps in the framework, I'd like to unblock this. In hindsight, my previous comment pushing for an index-style plugin repo is maybe a bit lofty for our use case; at a large scale it makes more sense, but a lot of the issues I was concerned about can be managed.

Kube-scheduler does provide a good precedent for us to follow, even if we don't have to do everything the same way as that repo. So I am +1 for taking that approach. This will be the fastest way for us to unblock more feature development and promote the framework with examples to users.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2024
@pravarag (Contributor)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2024
@binacs (Member) commented Apr 27, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 24, 2024
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@binacs (Member) commented Sep 25, 2024

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Sep 25, 2024
@k8s-ci-robot (Contributor)

@binacs: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@seanmalloy (Member)

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 4, 2024
@seanmalloy (Member)

@damemi @ingvagabund @knelasevero @a7i do we have consensus on how we want to move forward with this? Do we want to create another git repo in the kubernetes-sigs org for out-of-tree descheduler plugins?

I believe I might have some bandwidth to help move this forward.

@seanmalloy (Member)

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Oct 4, 2024
@seanmalloy seanmalloy pinned this issue Oct 4, 2024