‼️ NOTICE: aws-eks "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1" #15072
Comments
This is also a problem for us, +1. It also happens with Kubernetes 1.20 on our real cluster.
+1 Same error on EKS cluster v1.18. Currently blocking deployment of K8s manifest/YAML changes via CDK.
The recent bump in the kubectl version from 1.20.0 to 1.21.0 broke KubernetesManifest updates. Marking this as a p1, given the other comments from people facing the same issue.
We are preparing a patch release for this fix. Will update once available. |
Version 1.110.1 was released with the patch.
Out of curiosity, why isn't the kubectl handler version always configured to be the same version as the Kubernetes cluster?
The recent [bump] in the kubectl version from 1.20.0 to 1.21.0 broke KubernetesManifest updates. Fixes aws#15072.

[bump]: aws@c7f9f97

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Seeing the same error.
@asgerjensen Can you please paste a code snippet showing how you're creating your cluster? Did you provide a custom kubectl layer, or are you using the defaults?
@robertd I just ran into this problem as well: Fargate EKS 1.23 built using CDK 2.53.0. The cluster looks as follows:
As you can see, I supplied the matching kubectl layer for k8s 1.23. Nevertheless I keep seeing the error:
I have since upgraded CDK to 2.55.0 and EKS to 1.24, and the error still appears. @asgerjensen Did you make any progress?
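For reference, here is a minimal sketch of the pattern described above: a CDK v2 Fargate cluster that explicitly supplies a kubectl layer matching the cluster's Kubernetes version. It assumes the `@aws-cdk/lambda-layer-kubectl-v23` package and uses illustrative names; it is not the commenter's actual code.

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV23Layer } from '@aws-cdk/lambda-layer-kubectl-v23';

export class EksClusterStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Fargate-backed EKS cluster; the kubectl layer is pinned to match
    // the cluster's Kubernetes version (1.23 here).
    const cluster = new eks.FargateCluster(this, 'Cluster', {
      version: eks.KubernetesVersion.V1_23,
      kubectlLayer: new KubectlV23Layer(this, 'KubectlLayer'),
    });

    // Manifests applied through this cluster use the supplied kubectl layer.
    cluster.addManifest('ExampleConfigMap', {
      apiVersion: 'v1',
      kind: 'ConfigMap',
      metadata: { name: 'example-config' },
      data: { key: 'value' },
    });
  }
}
```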
Please add your +1 👍 to let us know you have encountered this
Status: IN-PROGRESS
Overview:
Version 1.106.0 and later of the aws-eks construct library throw an error when trying to update a `KubernetesManifest` object; this includes objects created via the `cluster.addManifest` method.

Complete Error Message:
Workaround:
Downgrade to version 1.105.0 or below
Original opening post
When updating a KubernetesManifest, the deploy fails with an error like:
This issue occurs with Kubernetes versions 1.16, 1.17, and 1.20.
Reproduction Steps
1. Deploy a `KubernetesManifest` containing `maxUnavailable: 1`. This deploys successfully.
2. Change `maxUnavailable: 1` to `maxUnavailable: 2` and deploy again. This results in the error above. (A sketch of such a manifest is shown below.)
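As a concrete illustration of the steps above, here is a minimal sketch of the kind of manifest update that triggers the failure. The PodDisruptionBudget resource, the names, and the CDK v2 import path are assumptions for illustration, not the reporter's actual manifest; with aws-eks 1.106.0 and later (kubectl 1.21), changing the field and redeploying produces the pruning error.

```ts
import * as eks from 'aws-cdk-lib/aws-eks';

// Assumes an existing cluster defined elsewhere in the stack.
declare const cluster: eks.Cluster;

// Step 1: deploy with maxUnavailable: 1 — this succeeds.
// Step 2: change maxUnavailable to 2 and deploy again — the update fails with
// "error retrieving RESTMappings to prune: invalid resource networking.k8s.io/v1".
cluster.addManifest('AppPdb', {
  apiVersion: 'policy/v1beta1',
  kind: 'PodDisruptionBudget',
  metadata: { name: 'app-pdb' },
  spec: {
    maxUnavailable: 1, // change to 2 and redeploy to reproduce
    selector: { matchLabels: { app: 'my-app' } },
  },
});
```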
What did you expect to happen?
I would have expected the deploy to succeed and update the `maxUnavailable` field in the deployed manifest from 1 to 2.

What actually happened?
Environment
Other
This is a 🐛 Bug Report