[aws-eks] Occasional failures when creating k8s resources during cluster update (needs AWS CLI upgrade) #6279
Comments
Thanks for reporting.
Any chance you can grab the CloudWatch logs from the resource provider so I can have a bit more visibility into the issue?
@eladb Do you mean this?
Thanks, seems like this is related: aws/aws-cli#3914. The CLI will fail update-kubeconfig if the cluster status is not ACTIVE.
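A minimal sketch (not part of the CDK handler) of how one can observe this state with boto3: while the control plane is being modified, `describe_cluster` reports a status other than `ACTIVE`, which is exactly when older CLI versions refuse to run `update-kubeconfig`. The cluster name is a placeholder.

```python
import boto3

# Placeholder cluster name for illustration only.
CLUSTER_NAME = "my-cluster"

eks = boto3.client("eks")
status = eks.describe_cluster(name=CLUSTER_NAME)["cluster"]["status"]

# During a control-plane update the status is e.g. "UPDATING" instead of
# "ACTIVE", and older AWS CLI versions reject `aws eks update-kubeconfig`.
print(status)
```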
@eladb Do you plan to release a fix for it?
This is fixed by this commit and released in AWS CLI 1.18.70. @pahud, I can see that the latest version of the kubectl layer (2.0.0-beta3) currently uses AWS CLI 1.18.37. What would it take to release a new version with the latest CLI and update the aws-eks module to use it?
Should be resolved by #7216.
After #5540 is merged there is still a possibility of failure when creating an EKS cluster with added Kubernetes resources. It doesn't always happen. The problem is in the kubectl-handler Python Lambdas, which use "aws eks update-kubeconfig" as a way to "log in" to the Kubernetes cluster. The command can fail with the output "Cluster status not active", which is not retried and results in failure of the whole stack creation.
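A hedged sketch of a retry-based workaround, not the actual kubectl-handler code: wrap the `aws eks update-kubeconfig` call and retry while the cluster is not yet active. The cluster name, role ARN, kubeconfig path, and retry parameters are placeholders.

```python
import subprocess
import time


def update_kubeconfig(cluster_name, role_arn, kubeconfig, retries=10, delay=30):
    """Run `aws eks update-kubeconfig`, retrying while the cluster is updating."""
    for attempt in range(retries):
        try:
            subprocess.check_call([
                "aws", "eks", "update-kubeconfig",
                "--name", cluster_name,
                "--role-arn", role_arn,
                "--kubeconfig", kubeconfig,
            ])
            return
        except subprocess.CalledProcessError:
            # Older CLI versions exit non-zero with "Cluster status not active"
            # while the control plane is updating; back off and try again.
            time.sleep(delay)
    raise RuntimeError("update-kubeconfig kept failing; cluster never became active")
```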
Reproduction Steps
Create an EKS stack and add Kubernetes resources to it using methods like addResource(); see the sketch below.
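A hedged reproduction sketch using the Python flavour of the CDK aws-eks module (API names as of the 1.x experimental module; depending on your CDK version you may need to pass additional required props such as the Kubernetes version):

```python
from aws_cdk import core
from aws_cdk import aws_eks as eks

app = core.App()
stack = core.Stack(app, "EksRepro")

# Minimal cluster; adjust props (e.g. version, capacity) to your CDK version.
cluster = eks.Cluster(stack, "Cluster")

# Adding Kubernetes resources makes CDK deploy them through the
# kubectl-handler Lambda, which runs `aws eks update-kubeconfig`
# and can fail while the cluster is still updating.
cluster.add_resource("config-map", {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "example"},
    "data": {"hello": "world"},
})

app.synth()
```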
Error Log
Environment
Other
This is a 🐛 Bug Report