
Cluster-from-kubeconfig relies on us generating a kubeconfig; do we want to do that #9990

Closed
justinsb opened this issue Sep 25, 2020 · 7 comments

Comments

@justinsb
Member

Q: How important is the "default cluster from kubeconfig" behaviour to user experience? Should we try to reinstate that behaviour by exporting kubeconfig by default?

Background

Historically, we used the current cluster in kubeconfig to make specifying the cluster name optional for most commands, making the flow a little easier.

Recently we've started not exporting the kubeconfig by default (though you can pass an --admin flag), which means that the cluster name must be specified on every command.

We did this for security, because kubeconfig previously contained non-expiring credentials.

Recently we introduced experimental support for automatically obtaining a short-lived credential by having kops generate one when the current credential has expired. We could therefore safely export a kubeconfig by default again (from a security point of view), except that this functionality is relatively new and not particularly widely tested.
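
To make the difference concrete, a rough sketch of the two flows described above (cluster name is hypothetical; exact flags may vary between kops versions):

    # Without an exported kubeconfig, every command needs the cluster name spelled out:
    kops validate cluster --name mycluster.example.com

    # After exporting admin credentials into the kubeconfig, the current context
    # supplies the cluster name, so it can be omitted:
    kops export kubecfg --admin --name mycluster.example.com
    kops validate cluster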

justinsb added a commit to justinsb/kops that referenced this issue Oct 25, 2020
We do log a hint for the user when we have exported an empty kubecfg,
but this now supports the "current cluster" UX.

Issue kubernetes#9990
johngmyers pushed a commit to johngmyers/kops that referenced this issue Oct 26, 2020
We do log a hint for the user when we have exported an empty kubecfg,
but this now supports the "current cluster" UX.

Issue kubernetes#9990
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 24, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 23, 2021
@olemarkus
Member

@justinsb guess this can be closed now?

Just running kops export kubecfg --admin also works, defaulting to the current context

/remove-lifecycle rotte
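
For reference, a minimal illustration of the flow described above, assuming the cluster to operate on is already the current context in kubeconfig (defaulting behaviour may vary by kops version):

    # Re-export admin credentials for whatever cluster the current context points at:
    kops export kubecfg --admin

    # Subsequent commands can then pick the cluster name up from the context:
    kops validate cluster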

@olemarkus
Member

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Jan 24, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 24, 2021
@olemarkus
Member

I think we have more or less solved this one now

/close
/remove-lifecycle stale

@k8s-ci-robot
Contributor

@olemarkus: Closing this issue.

In response to this:

I think we have more or less solved this one now

/close
/remove-lifecycle stale

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot removed the lifecycle/stale label Apr 24, 2021