
Can't connect to AWS cluster #2

Open

tibberg opened this issue Sep 4, 2020 · 10 comments

tibberg commented Sep 4, 2020

I can open the minikube cluster, but if I try to open an AWS connection, I get "Boom!! K9s can't connect to cluster."

I suspect that either $KUBECONFIG is ignored, or the aws-iam-authenticator cannot read the ~/.aws/credentials in a confined env, but I don't know much about snaps.

nsg (Owner) commented Sep 5, 2020

It's correct that KUBECONFIG is not exposed inside the snap. Because snaps run inside a restricted sandbox, only a few filtered environment variables are passed into the snap. I do not think there is any way to change this, but I will investigate it because it would be useful to pass the KUBECONFIG variable. You can use --kubeconfig to point to a specific config file, --kubeconfig $KUBECONFIG should work.
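
For example, with a single config file, that would look something like:

k9s-nsg --kubeconfig ~/.kube/config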

It's also correct that the snap does not have permission to read inside ~/.aws; this snap only has read access to ~/.kube/ and read/write access to the snap's working directory ~/snap/k9s-nsg/current/.

The easiest way is probably to copy/move the credentials over to ~/.kube; a symlink will not work, but a bind mount should. Let me know if you need more help or if there is anything else I can do to improve the snap.
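
Something like this (a rough sketch, adjust the paths to your setup):

# copy the AWS credentials into a directory the snap can read
cp ~/.aws/credentials ~/.kube/credentials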

tibberg (Author) commented Sep 8, 2020

You can use --kubeconfig to point to a specific config file, --kubeconfig $KUBECONFIG should work.

Unfortunately this does not work because

$ echo $KUBECONFIG 
/home/tib/.kube/config:/home/tib/.kube/stg-config:/home/tib/.kube/dev-config

And even though on startup I see a report on the console that the config files are (linked/copied?), k9s-nsg gives the error:
stat /home/tib/.kube/config:/home/tib/.kube/stg-config:/home/tib/.kube/dev-config: no such file or directory

I think the root of this problem is that I use a separate config file for each cluster and merge these configs with the KUBECONFIG variable, and k9s cannot handle multiple config files separated with :. Of course I could start the process with one file passed in the --kubeconfig parameter, but in that case I cannot switch contexts within k9s (with the :ctx command).

The easiest way is probably to copy/move the credentials over to ~/.kube; a symlink will not work, but a bind mount should. Let me know if you need more help or if there is anything else I can do to improve the snap.

I should figure out how to specify the location of the credentials file for the aws-iam-authenticator command. Do you have any idea?

nsg (Owner) commented Sep 12, 2020

I should figure out how to specify the location of the credentials file for the aws-iam-authenticator command. Do you have any idea?

I have never used aws-iam-authenticator so I have no idea.

Unfortunately this does not work because /.../

Ah, of course.

And even though on startup I see a report on the console that the config files are (linked/copied?) /.../

Linked, $HOME is ~/snap/k9s-nsg/current/ inside the sandbox. The k9s-nsg inside the snap is a wrapper script that prepares the environment for k9s. One of the things I do is to link the relevant files from ~/.kube to ~/snap/k9s-nsg/current/.kube.
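
Roughly, the wrapper does something like this (a simplified sketch, not the actual script; the k9s binary location and the loop are illustrative):

#!/bin/sh
# Simplified sketch: expose the user's kubeconfig files inside the snap's
# HOME (~/snap/k9s-nsg/current/) so k9s finds them at $HOME/.kube.
mkdir -p "$SNAP_USER_DATA/.kube"
for f in /home/"$USER"/.kube/*; do
    ln -sf "$f" "$SNAP_USER_DATA/.kube/"
done
exec "$SNAP/bin/k9s" "$@"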

I guess multiple files are not supported with vanilla k9s either, derailed/k9s#829
I will experiment to see if there is some workaround I can do for the multiple-files thing.
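
One workaround that might be worth trying is to flatten the merged configuration into a single file with kubectl and point k9s at that (untested sketch; paths taken from the error output above):

KUBECONFIG=/home/tib/.kube/config:/home/tib/.kube/stg-config:/home/tib/.kube/dev-config \
  kubectl config view --flatten > /home/tib/.kube/merged-config
k9s-nsg --kubeconfig /home/tib/.kube/merged-config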

nsg (Owner) commented Sep 12, 2020

By the way, symlinks outside ~/.kube and ~/snap/k9s-nsg will not work. I have never tested this myself, but I think bind mounts work, so for example:

mkdir $HOME/.foobar
sudo mount -o bind $HOME/.kube $HOME/.foobar

Now both .kube and .foobar are the same directory. That will probably trick AWS. I have no idea if this is a good idea :)

tibberg (Author) commented Sep 16, 2020

OK, I've checked how aws-iam-authenticator finds the credentials file, and it seems the location can be overridden with the env variable AWS_SHARED_CREDENTIALS_FILE; see shared_credentials_provider.go

So as I understand it, I need to get my credentials file into ~/snap/k9s-nsg/current/.kube and have AWS_SHARED_CREDENTIALS_FILE point to this location. As my k9s config references the executable without a path, I need to have the executable inside the snap as well, right?

  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - dev
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: [redacted]
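
So presumably the env list would grow an entry along these lines (a sketch; the path assumes the credentials file ends up at ~/.kube/credentials, and the value has to be an absolute path since it is passed literally):

      env:
      - name: AWS_PROFILE
        value: [redacted]
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: /home/tib/.kube/credentials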

BTW the vanilla k9s can use multiple config files.

nsg (Owner) commented Sep 16, 2020

So as I understand it, I need to get my credentials file into ~/snap/k9s-nsg/current/.kube /.../

Yes, or just ~/.kube

As my k9s config references the executable without a path, I need to have the executable inside the snap as well, right?

Yeah, if aws-iam-authenticator is a self-contained binary (or similar), it may work to copy it to ~/snap/k9s-nsg/current/ and execute it with an absolute path. I have not considered this use case; I need to investigate it.

BTW the vanilla k9s can use multiple config files.

Yes, I just realized that. The linked issue has been updated with more information now. I split my .kube/config into two separate files and set KUBECONFIG to both files. It works for me: the environment variable is available inside the snap and I can see both clusters. If this is not working for you, make sure that it's exported (export KUBECONFIG in Bash).

If you'd like to "jump into" the snap's environment, you can run snap run --shell k9s-nsg.
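
In other words, something like this (file names are just an example):

export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/stg-config
k9s-nsg

# and to poke around inside the sandbox:
snap run --shell k9s-nsg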

tibberg (Author) commented Sep 16, 2020

OK,

  • copied the aws-iam-authenticator executable to ~/.kube (it is a statically linked binary)
  • changed the k9s config to point to ./aws-iam-authenticator
  • copied the credentials file to .kube
  • exported AWS_SHARED_CREDENTIALS_FILE env variable with value /home/tib/.kube/credentials
  • exported KUBECONFIG

It still works on my locally installed k9s, while k9s-nsg is still not working. I've jumped into the snap env and I see that aws-iam-authenticator is not executable inside.

nsg (Owner) commented Sep 17, 2020

copied the aws-iam-authenticator executable to ~/.kube (it is a statically linked binary)
changed the k9s config to point to ./aws-iam-authenticator

K9s will try to open that file relative to the configuration file, which from inside the snap is under ~/snap/k9s-nsg/current/.kube/. So you need to move the binary there, or use an absolute path in the configuration.
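
For example (untested; mirrors the earlier suggestion of copying the binary into the snap's working directory):

cp ~/.kube/aws-iam-authenticator ~/snap/k9s-nsg/current/

and then, in the kubeconfig:

      command: /home/tib/snap/k9s-nsg/current/aws-iam-authenticator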

/.../ is not executable inside.

Are you unable to execute the script? Any messages?

nsg added a commit that referenced this issue Sep 27, 2020
Add KUBECONFIG and External commands sections from discussion in
issue #2. The interface is also auto-connected, so I added a "may";
in the future, possibly remove that sentence.
joeky888 commented

Same here. I have to set confinement: classic to get it to work.

I found that https://github.com/snapcrafters/helm and https://snapcraft.io/devoperator
are using classic confinement.
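
For reference, that is a one-line change in snapcraft.yaml (fragment for illustration):

# snapcraft.yaml
confinement: classic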

nik123 commented Jun 29, 2022

I'll second @joeky888's opinion. Several development and DevOps tools use classic confinement, and so should k9s. I presume it would save a lot of time.
