Unable to connect if using a KUBECONFIG variable with multiple files #829
Comments
This works for me just fine actually, and I've been using a compound KUBECONFIG.
Can confirm. I run K9s with multiple clusters in my KUBECONFIG.
I'm also having this problem, although only on my M1 MacBook running Big Sur. It seems to work fine on my Intel iMac running Catalina. I'm also using ZSH as my shell, if that makes any difference. It seems as if k9s can only connect to clusters that are in the default ~/.kube/config file; anything loaded from the other files in KUBECONFIG fails to connect. If the same clusters are added to the default config file, they work fine. kubectl works fine for all of the clusters in my case.
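One workaround consistent with that observation is to merge everything kubectl sees into a single flattened file and let k9s read only that. A minimal sketch, assuming `kubectl` is on PATH (the guard makes it a no-op otherwise; the output path is a made-up example):

```shell
# Merge all files named in KUBECONFIG into one flattened config.
merged="$HOME/.kube/config.merged"   # hypothetical output path
mkdir -p "$HOME/.kube"

if command -v kubectl >/dev/null 2>&1; then
  # kubectl reads every file in KUBECONFIG (in order) and emits one merged doc:
  kubectl config view --flatten > "$merged"
  echo "wrote $merged"
else
  echo "kubectl not found; nothing merged"
fi
# Then point k9s at the single file:  k9s --kubeconfig "$HOME/.kube/config.merged"
```

This sidesteps whatever k9s does differently from kubectl when splitting the compound variable, at the cost of having to re-run the merge when any source file changes.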
This is the error I get: […]
If I run […]
I have the same problem when using the $KUBECONFIG env variable.
It must be some kind of shell configuration issue, because it only gives me trouble on my MBP and not my iMac, despite both having the exact same shell (ZSH) and using the same KUBECONFIG variable and kubeconfig files.
Had the issue too, but in my case one of my config files was faulty (pointing to a deleted cluster).
I am getting this error using the Docker container. I do NOT get the error when I install k9s using […]
Same for me.
I upgraded to the latest k9s and it started for me as well. I am not using KUBECONFIG, and the config is read from the default location ~/.kube/config. Has anyone found a proper resolution?
OP here, this is not a problem for me and hasn't been for a while. |
Same here, after the upgrade it started giving me this error.
@prithvireddy could you please share debug logs? Thanks!
FYI, I was able to fix it. I did some trial-and-error steps and don't remember exactly what fixed it.
Same issue here; rolling back to k9s […] resolved it.
Fresh install of k9s, getting the following debug logs: […]
@asadk12 please take a look at #1675, #1630 and #1619 (comment).
Thanks for the pointers @slimus. In my case the issue was indeed with the version of […], so I updated the awscli. Thanks for the help! 🙏🏼
k9s definitely seems to be doing something weird here. I'm on Windows, running k9s under WSL. My $KUBECONFIG looks like:

```
❯ echo $KUBECONFIG
/home/mpen/.kube/config:/mnt/c/Users/Mark/.kube/config
```

i.e. merging my Windows and WSL configs. My Windows config has in it:

```yaml
- name: do-sfo3-secret-cluster-admin
  user:
    exec:
      ...
      command: C:\Users\Mark\bin\doctl.exe
```

My Linux one has:

```yaml
- name: do-sfo3-secret-cluster-admin
  user:
    exec:
      ...
      command: doctl
```

But if I do:

```
❯ kubectl config view --flatten | grep doctl
      command: doctl
```

you will see it's using the Linux version of doctl from […]. If I run k9s like this:

```
k9s --kubeconfig =(kubectl config view --flatten)
```

it still doesn't work. But if I hack my Windows kube config to remove doctl.exe, then I can connect with k9s. Then if I put my config back to how I had it and run k9s with […]. And I know it's trying to use doctl.exe when it shouldn't, because my logs say […]
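The conflict above comes from two kubeconfig entries sharing the same user name but pointing at different exec binaries. A hedged sketch of what the non-conflicting entry could look like (this is not the commenter's literal file; the `apiVersion` and `args` shown are illustrative, and the cluster ID is elided):

```yaml
# Using a bare command name lets each OS resolve the binary from its own PATH,
# instead of hard-coding a Windows path like C:\Users\Mark\bin\doctl.exe
# that breaks when the merged config is read from inside WSL.
users:
- name: do-sfo3-secret-cluster-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: doctl
      args:
        - kubernetes
        - cluster
        - kubeconfig
        - exec-credential
        - "..."   # cluster ID elided
```

When both files define the same user, whichever file wins the merge determines which `command` gets executed, so keeping the stanzas identical (or the user names distinct) avoids the mismatch.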
Can confirm @asadk12's comment; you may need to update your AWS CLI version to get an up-to-date kubeconfig for k9s compatibility.
If you have a KUBECONFIG with multiple different files, each separated by ':', you get an "Unable to connect to context" message from k9s, while each file works fine when used separately with --kubeconfig.
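A quick way to sanity-check the compound variable before blaming k9s — one commenter traced the same symptom to a config file pointing at a deleted cluster — is to split KUBECONFIG on ':' and verify that each file actually exists. The paths below are stand-ins for illustration:

```shell
# Demo with hypothetical paths: one real file, one missing file.
good="$(mktemp)"                          # stands in for a valid kubeconfig
KUBECONFIG="$good:/tmp/does-not-exist.yaml"

# kubectl and k9s split KUBECONFIG on ':' and merge the files in order;
# check each entry the same way:
echo "$KUBECONFIG" | tr ':' '\n' | while read -r f; do
  if [ -f "$f" ]; then echo "ok: $f"; else echo "MISSING: $f"; fi
done
```

A missing or stale entry can make k9s fail on startup even when kubectl, which may only touch the current context, still appears to work.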