[aws-eks] Missing unit tests for EKS custom resource #4695
eladb added the bug ("This issue is a bug."), needs-triage ("This issue or PR still needs to be triaged."), @aws-cdk/aws-eks ("Related to Amazon Elastic Kubernetes Service"), and p0 labels, and then removed the needs-triage label, on Oct 26, 2019.
eladb pushed a commit that referenced this issue on Nov 30, 2019:
There were two causes of timeouts during EKS cluster creation: a creation time longer than the AWS Lambda timeout (15 minutes), and a lack of retries when applying kubectl manifests after the cluster had been created. This change fixes the first issue by leveraging the custom resource provider framework to implement the cluster resource as an async resource. The second issue is fixed by adding 3 retries to "kubectl apply".

Fixes #4087
Fixes #4695
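The retry half of that fix is easy to picture. Below is a minimal, hypothetical TypeScript sketch of retrying "kubectl apply" a fixed number of times; the function name, attempt count default, and backoff values are illustrative assumptions, not the actual handler code from the commit.

```ts
import { execFile } from 'child_process';
import { promisify } from 'util';

const execFileAsync = promisify(execFile);
const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

// Hypothetical sketch: retry "kubectl apply" a few times, since the EKS API
// server may not accept connections immediately after cluster creation.
async function kubectlApply(manifestFile: string, maxAttempts = 3): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await execFileAsync('kubectl', ['apply', '-f', manifestFile]);
      return; // applied successfully
    } catch (err) {
      if (attempt === maxAttempts) {
        throw err; // out of retries; surface the original failure
      }
      await sleep(5_000 * attempt); // simple linear backoff (assumed values)
    }
  }
}
```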
eladb pushed a commit that referenced this issue on Dec 30, 2019:
There were two causes of timeouts during EKS cluster creation: a creation time longer than the AWS Lambda timeout (15 minutes), and a lack of retries when applying kubectl manifests after the cluster had been created. This change fixes the first issue by leveraging the custom resource provider framework to implement the cluster resource as an async resource. The custom resource providers are now bundled as nested stacks so they don't take up too many resources from users, and they are reused by multiple clusters within the same stack. This required that the creation role not be the same as the Lambda role, so we define this role separately and assume it within the providers. The second issue is fixed by adding 3 retries to "kubectl apply".

**Backwards compatibility**: as described in #5544, since the resource provider handler of `Cluster` and `KubernetesResource` has changed, this change requires replacement of existing clusters (deployment fails with a "service token cannot be changed" error). Since this can be disruptive to users, this change includes an exact copy of the previous version under a new module called `@aws-cdk/aws-eks-legacy`, which can be used as a drop-in replacement until users decide to upgrade to the new version. Using the legacy cluster emits a synthesis warning that the module will no longer be released as part of the CDK starting March 1st, 2020.

- Fixes #4087
- Fixes #4695
- Fixes #5259
- Fixes #5501

---

BREAKING CHANGE: (in experimental module) the providers behind the AWS EKS module have been rewritten to address multiple stability issues. Since this change requires cluster replacement, the old version of this module is available under `@aws-cdk/aws-eks-legacy`. Please read #5544 carefully for upgrade instructions.
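The "async resource" approach the commit describes is the onEvent/isComplete split of the custom resource provider framework: one handler starts the long-running operation, another polls it, so no single Lambda invocation has to outlive the 15-minute limit. Here is a hedged sketch using the v1-era `@aws-cdk/custom-resources` `Provider` API; the handler asset path, polling interval, and timeout budget are illustrative assumptions, not the actual aws-eks wiring.

```ts
import * as lambda from '@aws-cdk/aws-lambda';
import * as cdk from '@aws-cdk/core';
import * as cr from '@aws-cdk/custom-resources';

export class ClusterProviderStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // onEvent kicks off the long-running operation (e.g. EKS CreateCluster)
    const onEvent = new lambda.Function(this, 'OnEvent', {
      runtime: lambda.Runtime.NODEJS_12_X,
      handler: 'index.onEvent',
      code: lambda.Code.fromAsset('handler'), // illustrative asset path
    });

    // isComplete is polled until the cluster reaches its terminal state,
    // so the total wait is not bounded by a single Lambda invocation
    const isComplete = new lambda.Function(this, 'IsComplete', {
      runtime: lambda.Runtime.NODEJS_12_X,
      handler: 'index.isComplete',
      code: lambda.Code.fromAsset('handler'),
    });

    const provider = new cr.Provider(this, 'ClusterProvider', {
      onEventHandler: onEvent,
      isCompleteHandler: isComplete,
      queryInterval: cdk.Duration.seconds(30), // assumed polling interval
      totalTimeout: cdk.Duration.hours(1),     // assumed overall budget
    });

    new cdk.CustomResource(this, 'Cluster', { serviceToken: provider.serviceToken });
  }
}
```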
mergify bot added a commit that referenced this issue on Dec 30, 2019, with the same commit message as above (co-authored by mergify[bot]).
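For users who could not tolerate cluster replacement, the commits above note that `@aws-cdk/aws-eks-legacy` is an exact copy of the previous module. The switch is essentially a one-line import change; the cluster props below are illustrative.

```ts
// To defer the upgrade, import the legacy copy instead of the rewritten module:
// import * as eks from '@aws-cdk/aws-eks';
import * as eks from '@aws-cdk/aws-eks-legacy';
import * as cdk from '@aws-cdk/core';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'EksStack');

// Same construct API as before the rewrite; synthesis emits a warning that
// the legacy module will not be released as part of the CDK after March 1st, 2020.
new eks.Cluster(stack, 'Cluster', { defaultCapacity: 2 });
```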
iliapolo changed the title from "Missing unit tests for EKS custom resource" to "[aws-eks] Missing unit tests for EKS custom resource" on Aug 16, 2020.
Original issue description (eladb):

The custom resource is complex, and we currently don't have a good unit test harness to validate the various cases and ensure there are no regressions.

As soon as we have the AsyncCustomResource framework, we will migrate the EKS custom resource to TypeScript and add a unit test framework for it.
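To make the ask concrete, here is a hypothetical sketch of the kind of unit test the issue calls for once the handler is in TypeScript: the handler accepts an injected client so the AWS SDK can be stubbed out. All names here (`createClusterHandler`, `EksClient`) are assumptions for illustration, not code from the repository.

```ts
// Hypothetical handler under test: takes an injected EKS client so unit
// tests can substitute a stub and make no real AWS calls.
interface EksClient {
  createCluster(input: { name: string }): Promise<{ clusterArn: string }>;
}

async function createClusterHandler(event: { clusterName: string }, eks: EksClient) {
  const resp = await eks.createCluster({ name: event.clusterName });
  return { PhysicalResourceId: event.clusterName, Data: { Arn: resp.clusterArn } };
}

// A Jest-style test exercising the CREATE path against a stubbed client.
test('create event calls createCluster and returns the cluster name', async () => {
  const calls: Array<{ name: string }> = [];
  const fakeEks: EksClient = {
    createCluster: async (input) => {
      calls.push(input); // record the call for assertions
      return { clusterArn: 'arn:aws:eks:us-east-1:111111111111:cluster/my-cluster' };
    },
  };

  const result = await createClusterHandler({ clusterName: 'my-cluster' }, fakeEks);

  expect(calls).toEqual([{ name: 'my-cluster' }]);
  expect(result.PhysicalResourceId).toBe('my-cluster');
});
```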