
Unable to connect if using a KUBECONFIG variable with multiple files #829

Open
huats opened this issue Aug 3, 2020 · 22 comments
@huats

huats commented Aug 3, 2020

If you have a KUBECONFIG with multiple files, each separated by a ':', you get an "Unable to connect to context" message from k9s, while each of the files works fine when used separately with --kubeconfig.
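To illustrate (the file names here are just placeholders), each file works on its own but the combined KUBECONFIG does not:

# each file works fine on its own
k9s --kubeconfig ~/.kube/config
k9s --kubeconfig ~/.kube/other-cluster.yaml

# colon-separated KUBECONFIG fails
export KUBECONFIG=~/.kube/config:~/.kube/other-cluster.yaml
k9s    # -> "Unable to connect to context"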

@derailed derailed added the bug Something isn't working label Aug 9, 2020
@derailed derailed added norepro and removed bug Something isn't working labels Sep 10, 2020
@alephnull

alephnull commented Oct 5, 2020

With KUBECONFIG set to a list of files, I see a bunch of "Dial k8s failure" and "access denied" failures. With --kubeconfig, k9s behaves just fine.

With k9s --context something:

5:56PM DBG Active Context "something"
5:56PM ERR refine failed error="The specified context \"something\" does not exists in kubeconfig"
5:56PM ERR failed to connect to cluster error="context \"something\" does not exist"
5:56PM INF No context specific skin file found -- /home/alok/.k9s/something_skin.yml
5:56PM INF No skin file found -- /home/alok/.k9s/skin.yml. Loading stock skins.
5:56PM DBG CURRENT-NS "" -- No active namespace specified
5:56PM INF No namespace specified using cluster default namespace
5:56PM DBG Factory START with ns `""
5:56PM ERR PreferredRES - No API server connection
5:56PM WRN Fail CRDs load error="ACCESS -- No API server connection"
5:56PM DBG CustomView watching `/home/alok/.k9s/views.yml
5:56PM ERR Custom view load failed /home/alok/.k9s/views.yml error="open /home/alok/.k9s/views.yml: no such file or directory"
5:56PM ERR CustomView watcher failed error="no such file or directory"
5:56PM DBG [Config] Saving configuration...
5:56PM ERR restConfig load failed error="context \"something\" does not exist"
5:56PM ERR PreferredRES - No API server connection
5:56PM WRN Fail CRDs load error="ACCESS -- No API server connection"
5:56PM DBG [Config] Saving configuration...
5:56PM DBG BRO-STOP contexts
5:56PM ERR restConfig load failed error="context \"something\" does not exist"
5:56PM DBG BRO-STOP contexts
5:56PM WRN Conn check failed (1/15)
5:56PM DBG TABLE-MODEL canceled -- "contexts"

With k9s --kubeconfig something.yaml:

5:59PM DBG Active Context "something"
5:59PM INF ✅ Kubernetes connectivity
5:59PM DBG [Config] Saving configuration...
5:59PM INF No context specific skin file found -- /home/alok/.k9s/something_skin.yml
5:59PM INF No skin file found -- /home/alok/.k9s/skin.yml. Loading stock skins.
5:59PM DBG CURRENT-NS "" -- No active namespace specified
5:59PM INF No namespace specified using cluster default namespace
5:59PM DBG Factory START with ns `""

5:59PM DBG CustomView watching `/home/alok/.k9s/views.yml
5:59PM ERR Custom view load failed /home/alok/.k9s/views.yml error="open /home/alok/.k9s/views.yml: no such file or directory"
5:59PM ERR CustomView watcher failed error="no such file or directory"
5:59PM DBG Setting active ns "default"
5:59PM DBG [Config] Saving configuration...
5:59PM DBG [Config] Saving configuration...
5:59PM DBG BRO-STOP v1/pods
5:59PM DBG Setting active ns "all"
5:59PM DBG [Config] Saving configuration...
5:59PM DBG BRO-STOP v1/pods
5:59PM DBG TABLE-MODEL canceled -- "v1/pods"
5:59PM DBG Setting active ns ""
5:59PM DBG [Config] Saving configuration...
% k9s version
 ____  __.________       
|    |/ _/   __   \______
|      < \____    /  ___/
|    |  \   /    /\___ \ 
|____|__ \ /____//____  >
        \/            \/ 

Version:    0.22.1
Commit:     2e04a846e668f67af207a1030a310b5e2c864231
Date:       2020-09-18T19:53:17Z

@edobry

edobry commented Oct 9, 2020

This works for me just fine actually, and I've been using a compound KUBECONFIG for a while.

@BarakStout

Can confirm. I run K9s with multiple clusters in my KUBECONFIG separated by :. This looks more like a problem with your config file.

@longwa

longwa commented Aug 9, 2021

I'm also having this problem, although only on my M1 MacBook running Big Sur. It seems to work fine on my Intel iMac running Catalina.

I'm also using ZSH as my shell if that makes any difference.

It seems as if k9s can only connect to clusters that are in the default ~/.kube/config file. Anything loaded from the other files in the KUBECONFIG will fail to connect. If the same clusters are added to the default config file, they work fine.

kubectl works fine for all of the clusters in my case.
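A quick way to compare (paths are examples only): kubectl sees the merged view of every file, while k9s only connects to the contexts from the default file:

echo $KUBECONFIG
/home/me/.kube/config:/home/me/.kube/extra-config

# kubectl merges both files and lists every context
kubectl config get-contexts

# a context that only lives in extra-config fails to connect in k9s
k9s --context some-context-from-extra-config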

@longwa

longwa commented Aug 9, 2021

This is the error I get:

4:10PM WRN Fail CRDs load error="ACCESS -- No API server connection"
4:10PM WRN Unable to dial discovery API error="No connection to cached dial"
4:10PM ERR K9s can't connect to cluster error="Get \"https://x.x.x.x:6443/version?timeout=5s\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"
4:10PM ERR ClusterUpdater failed error="Conn check failed (1/5)"

@longwa

longwa commented Aug 9, 2021

If I run kubectl config view --flatten > ~/.kube/config then k9s will work fine, even if I leave the multiple configs in place via KUBECONFIG.
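For anyone trying the same workaround, a slightly safer variant is to flatten into a temp file first, since redirecting straight into ~/.kube/config truncates one of the input files before kubectl reads it:

# back up the current default config
cp ~/.kube/config ~/.kube/config.bak
# merge everything listed in KUBECONFIG into a single file
kubectl config view --flatten > /tmp/kubeconfig-merged
mv /tmp/kubeconfig-merged ~/.kube/config
k9s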

@andrey-gava

I have the same problem when using the env var $KUBECONFIG:

echo $KUBECONFIG
/home/gava/.kube/config:/home/gava/.kube/dsd-dev-config

@longwa

longwa commented Aug 16, 2021

It must be some kind of shell configuration issue b/c it only gives me trouble on my MBP and not my iMac, despite both having the exact same shell (ZSH) and using the same KUBECONFIG variable and kubeconfig files.

@Hybrid512

I had the issue too, but in my case one of my config files was faulty (pointing to a deleted cluster).
It might be an issue when a context is faulty.
However, this should work anyway ... k9s should just complain when trying to connect to the faulty context but be fine for the others.

@tdensmore

tdensmore commented Oct 3, 2021

It must be some kind of shell configuration issue b/c it only gives me trouble on my MBP and not my iMac, despite both having the exact same shell (ZSH) and using the same KUBECONFIG variable and kubeconfig files.

I am getting this error using the Docker container. I do NOT get the error when I install k9s using brew.
I also have Zsh installed.

@neogeogre

I am getting this error using the Docker container.

same for me

@gharia

gharia commented Jun 30, 2022

I upgraded to the latest k9s and it started happening for me as well. I am not using KUBECONFIG; the config is being read from the default location ~/.kube/config.

Anyone found a proper resolution?

@alephnull

OP here, this is not a problem for me and hasn't been for a while.

@prithvireddy

Same here, after the upgrade it started giving me this error

@slimus
Collaborator

slimus commented Jul 14, 2022

@prithvireddy could you please share debug logs? Thanks!

@gharia

gharia commented Jul 15, 2022

FYI, I was able to fix it. I did some trial-and-error steps and don't remember exactly which one fixed it:

  • I updated the aws cli to the latest version
  • I updated kubectl to the latest version
  • Most importantly, I deleted the kubeconfig file (~/.kube/config) and re-generated it using aws eks update-kubeconfig (see the sketch below)
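Roughly, the recovery steps look like this (cluster name and region are placeholders, and the exact upgrade commands depend on how the tools were installed):

# confirm the upgraded tool versions
aws --version
kubectl version --client

# regenerate the kubeconfig from scratch
rm ~/.kube/config
aws eks update-kubeconfig --name my-cluster --region us-east-1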

@elviento

Same issue here; rolling back to k9s v0.25.18 resolved it for me.

@asadk23

asadk23 commented Jul 26, 2022

Fresh install of k9s v0.26.0
No prior experience with k9s :|
kubectl v1.15
using zsh

Getting the following debug logs:

11:09AM ERR ClusterUpdater failed error="Conn check failed (1/5)"
11:09AM DBG TABLE-UPDATER canceled -- "contexts"
11:09AM ERR Unable to connect to api server error="exec plugin: invalid apiVersion \"client.authentication.k8s.io/v1alpha1\""
11:09AM ERR ClusterUpdater failed error="Conn check failed (2/5)"
11:09AM ERR Unable to connect to api server error="exec plugin: invalid apiVersion \"client.authentication.k8s.io/v1alpha1\""
11:09AM ERR ClusterUpdater failed error="Conn check failed (3/5)"
11:09AM ERR Unable to connect to api server error="exec plugin: invalid apiVersion \"client.authentication.k8s.io/v1alpha1\""
11:09AM ERR ClusterUpdater failed error="Conn check failed (4/5)"
11:09AM ERR Unable to connect to api server error="exec plugin: invalid apiVersion \"client.authentication.k8s.io/v1alpha1\""
11:09AM ERR Conn check failed (5/5). Bailing out!

@slimus
Collaborator

slimus commented Jul 26, 2022

@asadk12 please take a look at #1675, #1630 and #1619 (comment)

@asadk23

asadk23 commented Jul 26, 2022

Thanks for the pointers @slimus

In my case the issue is indeed with the version for client.authentication.k8s.io/v1alpha1
It should be changed to v1beta1
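For reference, the corrected exec section of the user entry in the kubeconfig looks roughly like this (cluster name, region and user name are placeholders):

users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - us-east-1
        - eks
        - get-token
        - --cluster-name
        - my-cluster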

I updated the awscli from 2.0.10 -> 2.7.18 and ran aws eks update-kubeconfig, which automatically updates the apiVersion as above, and I am now able to connect k9s to my desired cluster.

Thanks for the help! 🙏🏼

@mnpenner

mnpenner commented Aug 5, 2022

k9s definitely seems to be doing something weird here. I'm on Windows, running k9s under WSL. My $KUBECONFIG looks like

echo $KUBECONFIG
/home/mpen/.kube/config:/mnt/c/Users/Mark/.kube/config

i.e. merging my Windows and WSL configs.

My Windows config has this in it:

- name: do-sfo3-secret-cluster-admin
  user:
    exec:
      ...
      command: C:\Users\Mark\bin\doctl.exe

My Linux one has

- name: do-sfo3-secret-cluster-admin
  user:
    exec:
      ...
      command: doctl

But if I do

❯ kubectl config view --flatten | grep doctl
      command: doctl

You will see it's using the Linux version of doctl from /home/mpen/.kube/config

If I run k9s like this:

k9s --kubeconfig =(kubectl config view --flatten)

It still doesn't work:

[screenshot]

But if I hack my Windows kube config to remove doctl.exe, then I can connect with k9s.

Then if I put my config back to how I had it and run k9s with k9s --kubeconfig =(kubectl config view --flatten) again, it bypasses the :contexts screen and connects to my pods no problemo.

And I know it's trying to use doctl.exe when it shouldn't because my logs say

7:51PM ERR can't connect to cluster error="Get \"https://***.k8s.ondigitalocean.com/version?timeout=10s\": getting credentials: exec: executable C:\\Users\\Mark\\bin\\doctl.exe not found\n\nIt looks like you are trying to use a client-go credential plugin that is not installed.\n\nTo learn more about this feature, consult the documentation available at:\n      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins"

@bhurlow

bhurlow commented Jun 21, 2023

Can confirm @asadk12's comment: you may need to update your AWS CLI version to get an up-to-date kubeconfig for k9s compatibility.
