[aws-eks] EKS - 1.18.0 - Configuration changes to Cluster which require replacement create cluster with random name causing CF Stack to be inconsistent #5259
Comments
This is indeed a bug. Updates that require replacement should not be allowed for CloudFormation resources that have explicit physical names, because it is impossible to create a new resource with the same name before deleting the old one (which is how CloudFormation implements replacements). If you want to keep the same explicit physical name for the new cluster, you will have to first rename the old cluster and then create a new cluster with the updated configuration.
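The create-before-delete semantics described above can be modeled in a few lines. This is a hypothetical sketch (an assumption for illustration, not CloudFormation's actual code) showing why an explicitly named resource collides during replacement while an auto-named one does not:

```typescript
// Hypothetical model of CloudFormation replacement: the replacement
// resource is created *before* the old one is deleted, so an explicit
// physical name collides with the still-live original.

interface ResourceSpec {
  physicalName: string;   // requested name
  explicitName: boolean;  // true if the user pinned the name (e.g. clusterName)
}

function createReplacement(live: Set<string>, spec: ResourceSpec): string {
  // Generated names get a fresh random suffix, so no collision occurs.
  const name = spec.explicitName
    ? spec.physicalName
    : `${spec.physicalName}-${Math.random().toString(36).slice(2, 8)}`;
  if (live.has(name)) {
    throw new Error(`${name} already exists`); // old resource not deleted yet
  }
  live.add(name);
  return name;
}

const live = new Set(["my-cluster"]);

// Auto-named resource: replacement succeeds under a new random name.
console.log(createReplacement(live, { physicalName: "my-cluster", explicitName: false }));

// Explicitly named resource: replacement collides with the original.
try {
  createReplacement(live, { physicalName: "my-cluster", explicitName: true });
} catch (e) {
  console.log((e as Error).message); // "my-cluster already exists"
}
```

This is also why the workaround is to rename first: once the old resource no longer holds the name, a new resource can take it.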
There were two causes of timeouts for EKS cluster creation: a create time longer than the AWS Lambda timeout (15 min), and a lack of retries when applying kubectl manifests after the cluster had been created.

This change fixes the first issue by leveraging the custom resource provider framework to implement the cluster resource as an async resource. The custom resource providers are now bundled as nested stacks so they don't take up too many resources from users' stacks, and they are reused by multiple clusters within the same stack. This required that the creation role not be the same as the Lambda role, so we define this role separately and assume it within the providers.

The second issue is fixed by adding 3 retries to `kubectl apply`.

**Backwards compatibility**: as described in #5544, since the resource provider handler of `Cluster` and `KubernetesResource` has changed, this change requires replacement of existing clusters (deployment fails with a "service token cannot be changed" error). Since this can be disruptive to users, this change includes an exact copy of the previous version under a new module called `@aws-cdk/aws-eks-legacy`, which can be used as a drop-in replacement until users decide to upgrade to the new version. Using the legacy cluster will emit a synthesis warning that this module will no longer be released as part of the CDK starting March 1st, 2020.

- Fixes #4087
- Fixes #4695
- Fixes #5259
- Fixes #5501

---

BREAKING CHANGE: (in experimental module) the providers behind the AWS EKS module have been rewritten to address multiple stability issues. Since this change requires cluster replacement, the old version of this module is available under `@aws-cdk/aws-eks-legacy`. Please read #5544 carefully for upgrade instructions.
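The retry fix described above can be sketched generically. This is an illustrative helper (an assumption, not the CDK handler's actual code) for retrying a flaky step such as running `kubectl apply` right after cluster creation, while the API server may not yet be reachable:

```typescript
// Retry a flaky operation up to `attempts` times, rethrowing the last
// error if every attempt fails (mirrors the "3 retries" in the fix).
function withRetries<T>(fn: () => T, attempts = 3): T {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return fn();
    } catch (e) {
      lastError = e; // transient failure; try again
    }
  }
  throw lastError;
}

// Example: the step succeeds on the third attempt.
let calls = 0;
const result = withRetries(() => {
  calls += 1;
  if (calls < 3) throw new Error("connection refused");
  return "applied";
});
console.log(result, calls); // "applied" 3
```

In the real handler the retried step would shell out to `kubectl apply` rather than run an in-process function.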
Replacing a cluster with a specified name causes the new cluster to be created with a random name. Trying to delete the stack then fails with a ResourceNotFoundException.
Reproduction Steps
before:
after:
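The before/after snippets were not captured in this copy of the issue. A minimal hypothetical repro (an illustration only, using the experimental `@aws-cdk/aws-eks` API of the 1.x era; the specific replacement-triggering property is an assumption) might look like:

```typescript
import * as ec2 from "@aws-cdk/aws-ec2";
import * as eks from "@aws-cdk/aws-eks";
import * as cdk from "@aws-cdk/core";

class EksStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // "before": a cluster with an explicit physical name.
    new eks.Cluster(this, "Cluster", {
      clusterName: "cluster",
      vpc: new ec2.Vpc(this, "Vpc"),
    });

    // "after": changing a property that requires replacement (for
    // example, moving the cluster to a different VPC) forces
    // CloudFormation to replace the cluster; the replacement then
    // comes up with a random name instead of "cluster".
  }
}
```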
Error Log
The update applies correctly, but the created cluster name is `cluster-RandomAlphaNumericString`.
Trying to delete the stack causes this:
Environment
Other
This is 🐛 Bug Report