EKS update-config not working when updating cluster #3914

Closed
ktugan opened this issue Feb 7, 2019 · 7 comments · Fixed by #4036
Assignees: justnance
Labels: confusing-error, feature-request (A feature should be added or improved.)

Comments

ktugan commented Feb 7, 2019

I am not sure if this is by design. In case it isn't, executing:

aws eks update-kubeconfig --name dev-tools --kubeconfig ~/.kube/dev-tools

yields the following error message:

Cluster status not active

when the cluster is updating ("status": "UPDATING").

Steps:

  1. create cluster with older version
  2. update-cluster to newer version
  3. execute aws eks update-kubeconfig
  4. error

After looking into the source code, it looks like this is because it only checks for the status "ACTIVE"; everything else is rejected.

ktugan commented Feb 7, 2019

raise EKSClusterError("Cluster status not active")

This is the line of code causing it.
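
For context, this check runs when update-kubeconfig describes the cluster before writing the kubeconfig. A simplified sketch of that logic (paraphrased, not the exact awscli source) looks roughly like this:

```python
# Simplified sketch of the status gate in the EKS update-kubeconfig flow
# (paraphrased from the linked source, not verbatim awscli code).
import boto3


class EKSClusterError(Exception):
    pass


def get_cluster_description(cluster_name):
    """describe-cluster, then refuse anything that is not exactly ACTIVE."""
    eks = boto3.client("eks")
    description = eks.describe_cluster(name=cluster_name)["cluster"]
    if description.get("status") != "ACTIVE":
        # An UPDATING cluster lands here too, which produces the error above.
        raise EKSClusterError("Cluster status not active")
    return description
```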

justnance self-assigned this Feb 7, 2019
@justnance

@ktugan - Thank you for reaching out. Just so I understand your ask here: are you looking for us to add a feature to improve the error messages?

justnance added the response-requested label (Waiting on additional info and feedback. Will move to "closing-soon" in 7 days.) Feb 7, 2019
ktugan commented Feb 7, 2019

I should have stated my intentions more clearly. From what I see, there are two options going forward:

  1. kubectl works normally and as intended while the cluster is in UPDATING
     -> I would change the code as follows (see the standalone sketch at the end of this comment):

         if self._cluster_description["status"] != "ACTIVE":  # current
         -->
         if self._cluster_description["status"] not in ["ACTIVE", "UPDATING"]:  # suggested

  2. kubectl doesn't work properly while the cluster is updating: keep it as it is

Hope that makes it clearer. Sorry for not taking the time to write it up properly the first time.
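
To make option 1 concrete, here is a standalone illustration of the suggested condition (my own sketch, not awscli code): ACTIVE and UPDATING pass, while other states such as CREATING or DELETING are still rejected.

```python
# Standalone illustration of the suggested status check (not awscli source):
# "ACTIVE" and "UPDATING" are accepted, anything else is still rejected.
ALLOWED_STATUSES = ("ACTIVE", "UPDATING")


def check_cluster_status(cluster_description):
    status = cluster_description.get("status")
    if status not in ALLOWED_STATUSES:
        raise RuntimeError("Cluster status %s not in %s" % (status, ALLOWED_STATUSES))


check_cluster_status({"status": "ACTIVE"})    # passes
check_cluster_status({"status": "UPDATING"})  # passes with the suggested change
# check_cluster_status({"status": "DELETING"})  # would raise RuntimeError
```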

@justnance

@ktugan - Thank you for your feedback and for pointing this out. I'm labeling this as a feature request and confusing error pending further review.

justnance added the feature-request (A feature should be added or improved.) and confusing-error labels and removed the response-requested label Feb 8, 2019
@coryflucas (Contributor)

I opened PR #4036 to address this. I confirmed all the data from the describe-cluster response is available during an upgrade, so there doesn't appear to be any real reason this is not currently supported, other than that the UPDATING status did not exist when the update-kubeconfig functionality was added.
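
For anyone who wants to verify this themselves, a quick check along these lines (the cluster name is a placeholder) shows that the fields update-kubeconfig needs are still returned while the status reads UPDATING:

```python
# Quick verification sketch: describe-cluster still returns the endpoint and
# certificate authority data while the cluster status is "UPDATING".
import boto3

cluster = boto3.client("eks").describe_cluster(name="dev-tools")["cluster"]
print(cluster["status"])                             # e.g. "UPDATING"
print(cluster["endpoint"])                           # API server endpoint
print(cluster["certificateAuthority"]["data"][:30])  # CA bundle (truncated)
```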

@coryflucas (Contributor)

@justnance wondering if there is something I can do to get some review on the PR I opened for this? Thanks!

@chadlwilson

Any chance of getting the associated PR merged @justnance and team? This is highly unexpected behaviour.

We use aws eks update-kubeconfig in our pipelines to assume an appropriate deployment role for kubectl and helm deploys on ephemeral deployment agents before triggering a deployment. It's really unexpected to be unable to do this for the N minutes a cluster is upgrading.
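
Roughly what our deploy step does, sketched in Python to match the other snippets in this thread (cluster name, role ARN, and deployment name are placeholders):

```python
# Rough sketch of the ephemeral-agent deploy step described above
# (cluster name, role ARN, and deployment name are placeholders).
import subprocess

CLUSTER = "dev-tools"
DEPLOY_ROLE = "arn:aws:iam::123456789012:role/deployer"

# Write a kubeconfig for the target cluster, assuming the deployment role.
# This is the call that fails with "Cluster status not active" mid-upgrade.
subprocess.run(
    ["aws", "eks", "update-kubeconfig", "--name", CLUSTER, "--role-arn", DEPLOY_ROLE],
    check=True,
)

# Subsequent kubectl / helm invocations pick up that kubeconfig.
subprocess.run(["kubectl", "rollout", "status", "deployment/my-app"], check=True)
```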
