kubernetes should adhere to the "XDG Base Directory Specification" #56402

Closed
ensonic opened this issue Nov 27, 2017 · 47 comments
Labels
  • area/kubectl
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • sig/cli: Categorizes an issue or PR as relevant to SIG CLI.

Comments

@ensonic

ensonic commented Nov 27, 2017

/kind feature
/sig cli

What happened:
On Linux, several directories and files are created under ~/.kube/:

ls -l ~/.kube/
total 32
drwxr-x---  3 ensonic users 4096 Jan 11  2017 cache
-rw-------  1 ensonic users 9052 Nov 27 10:41 config
drwxr-x---  3 ensonic users 4096 Nov 27 10:41 http-cache
drwxr-x--- 18 ensonic users 4096 Nov 13 13:06 schema

What you expected to happen:
Essentially the ~/.kube/cache should go to ~/.cache/kube/ and the configs to ~/.config/kube/.
For details see
https://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
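
For illustration, here is a minimal shell sketch of the lookup the spec describes (an assumption about how it could look, not existing kubectl behaviour): the environment variable wins, and otherwise the spec's default under $HOME applies.

# XDG base-directory resolution with the spec's fallbacks (sketch only).
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/kube"   # would hold what is now ~/.kube/config
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/kube"      # would hold ~/.kube/cache and ~/.kube/http-cache
echo "config: $config_dir"
echo "cache:  $cache_dir"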

Anything else we need to know?:
The XDG setup eases maintenance (e.g. no need to back up cache dirs). It also makes it easier to work with local tools (e.g. when running tools in Docker, mount '.config' and be done, instead of hunting down each non-standard config).
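
As an illustration of that workflow, a hypothetical invocation (the image name is only a placeholder, and it assumes kubectl would read its config from ~/.config/kube as requested here):

# Mount the single standard config directory into the container, read-only.
docker run --rm -it \
  -v "$HOME/.config:/root/.config:ro" \
  my-tooling-image kubectl get nodes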

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  • OS (e.g. from /etc/os-release):
    Debian GNU/Linux buster/sid

  • Kernel (e.g. uname -a):
    4.9.0-4-amd64 #1 SMP Debian 4.9.51-1 (2017-09-28) x86_64 GNU/Linux

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Nov 27, 2017
@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 27, 2017
@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Nov 27, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 27, 2017
@razor-x

razor-x commented Mar 28, 2018

Would love to see kubernetes make it to this list https://wiki.archlinux.org/index.php/XDG_Base_Directory_support

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 26, 2018
@ensonic
Author

ensonic commented Jun 26, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 26, 2018
mikebryant added a commit to mikebryant/dotfiles that referenced this issue Aug 7, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 24, 2018
@razor-x

razor-x commented Sep 24, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 24, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 23, 2018
@ensonic
Author

ensonic commented Dec 29, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 29, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 29, 2019
@MazeChaZer

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 31, 2019
@NelsonJeppesen

NelsonJeppesen commented May 23, 2019

/remove-lifecycle stale
🙏 don't go stale

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 22, 2019
@remcohaszing

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 22, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 20, 2019
@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2022
@MazeChaZer

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 16, 2022
@toastal

toastal commented May 13, 2022

This thread is embarrassing to read. This bot is useless. 65+ 👍 reactions automatically means it's an issue people care about. Instead you have a mess of bot and human-to-bot noise that makes this issue illegible. People also get their time wasted by a bot email, having to say "yes, we're still here and interested in this bug -- and never stopped being interested".

@tony-sol

Totally agree with @toastal

@EvyBongers

Any idea when this can be implemented?

@tony-sol

@EvyBongers Looks like it's slowly moving toward the spec, according to this PR: #109479

@BenTheElder
Member

Please see kubernetes/enhancements#2111

I agree that this bot is annoying; it's possible to disable it:
/lifecycle frozen

... but to be clear, something like this is not going to be implemented just by filing a feature request here. For many years now, a breaking change or new feature like this has required a KEP / enhancement proposal to be filed in the enhancements repo, with the details reviewed and agreed on before deciding how to handle it.

You can see in kubernetes/enhancements#2111 that, so far, the approach has been rejected by SIG CLI due to the breaking changes.

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jul 26, 2022
@Volatus Volatus removed their assignment Aug 17, 2022
@helayoty helayoty added this to SIG CLI Oct 2, 2023
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG CLI Oct 2, 2023
@thepkc

thepkc commented Sep 24, 2024

Necromancing this. This is still relevant. This can be made non-breaking by having a list of folders to be checked, including the default one. It can even be done through an environment variable check, which is how most applications do it: if a config environment variable is set, it looks there; if not, it falls back to the legacy behaviour.
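
A rough sketch of the fallback order being proposed (hypothetical; not how kubectl resolves its config today):

# Check an explicit override first, then the XDG location, then the legacy path.
for candidate in "${KUBECONFIG:-}" \
                 "${XDG_CONFIG_HOME:-$HOME/.config}/kube/config" \
                 "$HOME/.kube/config"; do
  if [ -n "$candidate" ] && [ -f "$candidate" ]; then
    echo "using $candidate"
    break
  fi
done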

@BenTheElder
Member

BenTheElder commented Sep 24, 2024

This can be made non-breaking by having a list of folders to be checked, including the default one.

Please see the prior discussion linked for more details on why that's still a problematic approach.

Roughly: there's a broad ecosystem, and the configuration discovery is an API with multiple implementations. Things will break if we introduce skew in discovering the configuration files. The logic for this has been stable for many years and has implementations in many clients.

You can always override KUBECONFIG to point to your XDG-compliant directory, and then all tools will read that.
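
For example (the target path is only an illustration):

# Point every KUBECONFIG-aware tool at an XDG-style location.
export KUBECONFIG="${XDG_CONFIG_HOME:-$HOME/.config}/kube/config"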

@BenTheElder
Member

To ship a change of this scope someone will have to go through the enhancements process to outline an approved approach to ship it. The previously rejected enhancement is linked above with a lot of relevant discussion.

Unless someone writes up a more convincing enhancement and seeks approval from the kubectl maintainers, this is not happening.

The enhancements process allows us to ensure some rigor in how changes are introduced, feature gated, tested, and finally made GA.

If you're interested in making this happen, I would recommend talking to SIG api-machinery (for the client) and then SIG CLI before filing an enhancement.

@Volatus
Contributor

Volatus commented Sep 25, 2024

I like XDG, but unfortunately I'll have to agree that it's likely way too much of a breaking change, especially considering that setting KUBECONFIG will get you 90% of the way. Yes, you can't split the cache directory and whatnot, but it's "enough". Way too many external tools rely on or make assumptions about where the kubeconfig is, and as long as they respect the KUBECONFIG env var, they'll work fine. The return is simply not there, in my opinion, to justify a change like this.

@InSuperposition

Is there a point to using semantic versioning (major versions) if breaking changes can't be introduced?

@BenTheElder
Member

BenTheElder commented Sep 30, 2024

Is there a point of using semantic versioning (major versions), if breaking changes can't be introduced?

Alpha/beta features see breaking changes often. GA features do not, but technically could under strict rules, and this would need to be communicated as well.

kubeconfig and the flags for it are, of course, GA.

For every user that wants this particular breaking change, there are many, many more that want no breaking changes. Breaking changes to GA interfaces / APIs / ... (including k) are not taken lightly. The benefit needs to outweigh the costs to users and the ecosystem.

Again, if you'd like to propose a major change, the current mechanism to do so is a KEP: https://github.com/kubernetes/enhancements#is-my-thing-an-enhancement
There the specifics can be worked through and reviewed in a structured manner, and with much more visibility than commenting on an eight-year-old issue in a repo with ~46k issues.

@BenTheElder
Member

BenTheElder commented Sep 30, 2024

cc @soltysh @eddiezane (SIG CLI doesn't appear to have an alias?) @kubernetes/sig-api-machinery-leads

@soltysh
Contributor

soltysh commented Nov 6, 2024

cc @soltysh @eddiezane (SIG CLI doesn't appear to have an alias?) @kubernetes/sig-api-machinery-leads

We considered that a long time ago, but the community dependence built around the current locations will not allow us to make that switch. Jordan linked this when we were discussing kuberc (https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/3104-introduce-kuberc/README.md) but we decided not to. As you can see, https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/2229-kubectl-xdg-base-dir/kep.yaml is marked as rejected, so I guess it's fair to close this issue as we won't be addressing it.

/close

@k8s-ci-robot
Contributor

@soltysh: Closing this issue.

In response to this:

cc @soltysh @eddiezane (SIG CLI doesn't appear to have an alias?) @kubernetes/sig-api-machinery-leads

We considered that a long time ago, but the community dependence built around the current locations will not allow us to make that switch. Jordan linked this when we were discussing kuberc (https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/3104-introduce-kuberc/README.md) but we decided not to. As you can see, https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/2229-kubectl-xdg-base-dir/kep.yaml is marked as rejected, so I guess it's fair to close this issue as we won't be addressing it.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@github-project-automation github-project-automation bot moved this from Needs Triage to Closed in SIG CLI Nov 6, 2024