AWS CLI and kubectl don't work correctly when using custom file locations #7956

Closed

mike503 opened this issue Jun 8, 2023 · 2 comments

mike503 commented Jun 8, 2023

Describe the bug

I'm trying to write a small suite of scripts and want to ensure they use separate files rather than the default ~/.kube/config and ~/.aws/credentials, since those are managed by other tooling or are already customized per user.

The utility scripts I'm creating are supposed to be self-contained. AWS CLI commands on their own work fine with AWS_SHARED_CREDENTIALS_FILE, but it doesn't seem to be honored when kubectl uses a configuration created by aws eks update-kubeconfig (and, even setting that aside, when kubectl simply runs aws eks get-token under the hood).

Expected Behavior

Both tools support alternative config/credential file locations, but there doesn't seem to be a way to pass those locations through correctly when BOTH tools are using custom files.

Current Behavior

The generated kubeconfig does not include the appropriate AWS_SHARED_CREDENTIALS_FILE environment variable. However, even when I provide it manually, something still doesn't quite work.
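
For reference, this is roughly the user entry that aws eks update-kubeconfig writes (the account ID, region, and cluster name here are placeholders). Note that only AWS_PROFILE is persisted under env; AWS_SHARED_CREDENTIALS_FILE is not:

users:
- name: arn:aws:eks:us-east-1:xxx:cluster/CLUSTER_NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - CLUSTER_NAME
      env:
      - name: AWS_PROFILE
        value: test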

I noticed that years ago there was a bug report regarding this (#7724), which referenced the implementation in https://raw.githubusercontent.com/aws/aws-cli/1.24.4/CHANGELOG.rst, under 1.7.45:

* feature:Shared Credentials File: Add support for changing the shared credentials file from the default location of ``~/.aws/credentials`` by setting the ``AWS_SHARED_CREDENTIALS_FILE`` environment variable.

It looks like the true fix was in the botocore library, boto/botocore#623 (I ultimately couldn't figure out which version of botocore that was merged into).

Didn't work: an attempt with pip3 install awscli on Amazon Linux 2 (I tried it for kicks).
Version: aws-cli/1.27.149 Python/3.7.16 Linux/5.10.109-104.500.amzn2.x86_64 botocore/1.29.149

Didn't work: the originally installed RPM version. I noticed the file below on my system, from python2-botocore-1.18.6-1.amzn2.0.3.noarch (installed via RPM at some point), so the setting is clearly present there as well.

/usr/lib/python2.7/site-packages/botocore/configprovider.py:

    # This is the shared credentials file amongst sdks.
    'credentials_file': (None, 'AWS_SHARED_CREDENTIALS_FILE',
                         '~/.aws/credentials', None),
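
As a quick sanity check (a sketch, not from the original report), botocore exposes this setting through Session.get_config_variable, so you can confirm which file a given Python/botocore install will actually resolve:

$ AWS_SHARED_CREDENTIALS_FILE=~/.aws/custom.creds python3 -c \
    "import botocore.session; print(botocore.session.get_session().get_config_variable('credentials_file'))"
# expected output: the custom path, confirming the env variable is picked up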

Reproduction Steps

Easy to reproduce. Assuming ~/.aws/custom.creds is valid:

$ cat ~/.aws/custom.creds
[test]
aws_access_key_id = ...
aws_secret_access_key = ...

Then run the following. The expectation is that ~/.kube/custom.config will be generated (it is), with an AWS_PROFILE of "test" added but not AWS_SHARED_CREDENTIALS_FILE:

$ export AWS_SHARED_CREDENTIALS_FILE=~/.aws/custom.creds
$ export KUBECONFIG=~/.kube/custom.config
$ aws eks update-kubeconfig --name CLUSTER_NAME --profile test
Added new context arn:aws:eks:us-east-1:xxx:cluster/CLUSTER_NAME to /full/path/.kube/custom.config
$ kubectl version --short
Client Version: v1.27.2
Kustomize Version: v5.0.1
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Manually editing custom.config to add AWS_SHARED_CREDENTIALS_FILE still doesn't seem to work. Giving it an invalid path makes it error out as expected, but pointing it at the very same file the kubeconfig was created from fails, even though I can confirm the permissions are fine and the same credentials work with the standard paths. Somewhere there is a short circuit and I cannot figure out where it is.
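
To be concrete, the manual edit adds the variable to the exec env list in the generated kubeconfig, roughly like this (the path is illustrative; a full path avoids relying on tilde expansion inside the YAML value):

      env:
      - name: AWS_PROFILE
        value: test
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: /full/path/.aws/custom.creds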

Possible Solution

Someone should set both environment variables to alternate files, give it a go, and figure out how to make both tools honor them properly :)

Additional Information/Context

python --version
Python 2.7.18

python3 --version
Python 3.7.16

kubectl version
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}

Kernel: 5.10.109-104.500.amzn2.x86_64

CLI version used

aws-cli/2.11.26 Python/3.11.3 Linux/5.10.109-104.500.amzn2.x86_64 exe/x86_64.amzn.2 prompt/off

Environment details (OS name and version, etc.)

Amazon Linux 2

@mike503 mike503 added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Jun 8, 2023

mike503 commented Jun 8, 2023

Totally ignoring a custom kubeconfig location for now.

I have tried my best to strace this, and I can see a response back from the aws eks get-token call inside the dump, so it successfully makes that call. But then it doesn't seem to use the output properly, which is not what I'd expect.

# 1. works - AWS can utilize these credentials fine
$ AWS_SHARED_CREDENTIALS_FILE=~/.aws/custom.creds aws --region us-east-1 eks get-token --cluster-name CLUSTER_NAME

# 2. kubectl should work here, invoking the AWS CLI exactly as I did by hand above, but it fails
$ AWS_SHARED_CREDENTIALS_FILE=~/.aws/custom.creds kubectl version --short
error: You must be logged in to the server (the server has asked for the client to provide credentials)

# 3. pointing it at a non-existent file fails as expected, which proves it is relying solely on that file
$ AWS_SHARED_CREDENTIALS_FILE=~/.aws/custom.creds.404 kubectl version --short
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
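
One way to narrow down where the short circuit happens (a debugging sketch, not something from this report; the wrapper path and log file are made up) is to point the kubeconfig's command: at a small wrapper that logs its environment before delegating to the real AWS CLI:

#!/bin/sh
# /usr/local/bin/aws-debug -- hypothetical wrapper; set "command: /usr/local/bin/aws-debug"
# in the kubeconfig's exec section to capture exactly what kubectl passes in.
{
  echo "--- $(date) args: $*"
  env | grep -E '^(AWS_|KUBE)'   # log only the AWS/kube-related variables
} >> /tmp/aws-exec-debug.log
exec aws "$@"                    # hand off to the real AWS CLI

If AWS_SHARED_CREDENTIALS_FILE shows up in the log with the expected value, the problem lies in the AWS CLI's credential resolution; if it doesn't, kubectl (or the exec plugin machinery) is dropping it.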


indrora commented Jun 8, 2023

Does this replicate under the CLI v2?

(You will have to uninstall the version from pip.)

@indrora indrora added v1 and removed needs-triage This issue or PR still needs to be triaged. labels Jun 8, 2023
@nateprewitt nateprewitt removed the v1 label Jun 8, 2023
@indrora indrora added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Jun 20, 2023
@tim-finnigan tim-finnigan added closing-soon This issue will automatically close in 4 days unless further comments are made. and removed bug This issue is a bug. response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. labels Jul 24, 2023
@github-actions github-actions bot added closed-for-staleness and removed closing-soon This issue will automatically close in 4 days unless further comments are made. labels Jul 24, 2023