CDK fails to delete old global table upon id change #7189
SachinShekhar added the bug and needs-triage labels on Apr 6, 2020
SachinShekhar changed the title from "Fails to delete old global table upon id change" to "CDK fails to delete old global table upon id change" on Apr 6, 2020
RomainMuller added the p1 label and removed the needs-triage label on Apr 14, 2020
I suppose it's because the old permissions either get removed or updated away before the delete attempt begins. Maybe we can make this work by changing how the permissions are added, or by tuning the resource dependencies.
RomainMuller added a commit that referenced this issue on May 27, 2020
The permissions required to clean up old DynamoDB Global Table replicas were set up in such a way that, when removing a replication region, dropping replication entirely, or causing a table replacement, they were removed before CloudFormation reached the `CLEAN_UP` phase, causing a clean-up failure (the old tables would remain). This changes the way permissions are granted to the replication handler resource: they are now added using a separate `iam.Policy` resource, so that deleted permissions are also removed during the `CLEAN_UP` phase, after the resources depending on them have been deleted. The tradeoff is that two additional resources are added to the stack that defines the DynamoDB Global Tables, where previously those permissions were mastered in the nested stack that holds the replication handler. Unfortunately, the nested stack gets its `CLEAN_UP` phase executed as part of the nested stack resource update, not during its parent stack's `CLEAN_UP` phase. Fixes #7189
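The permission change described above can be sketched roughly as follows. This is a hypothetical illustration assuming the v1-era `@aws-cdk/aws-iam` API; the construct ids, the Lambda service principal, and the broad `dynamodb:*` statement are all illustrative, not the actual code of the fix, which lives in the `aws-dynamodb` module's replica provider:

```typescript
// Sketch: granting permissions via a standalone iam.Policy resource
// instead of inlining them into the handler's role.
import * as cdk from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'GlobalTableStack');

const handlerRole = new iam.Role(stack, 'ReplicaHandlerRole', {
  assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
});

// Statements inlined on the role get updated away before CLEAN_UP.
// A separate iam.Policy is its own resource in the outer stack, so
// CloudFormation only deletes it during CLEAN_UP, after the resources
// that depend on it have been deleted.
const replicaPolicy = new iam.Policy(stack, 'ReplicaHandlerPolicy', {
  statements: [
    new iam.PolicyStatement({
      actions: ['dynamodb:*'], // illustrative; the real fix scopes actions
      resources: ['*'],
    }),
  ],
});
replicaPolicy.attachToRole(handlerRole);
```

The design tradeoff is exactly the one the commit message names: two extra resources in the outer stack in exchange for deletions happening in the right phase.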
mergify bot pushed a commit that referenced this issue on Jun 8, 2020
When I change the `id` of a DynamoDB Global Table construct with removal policy `DESTROY`, CDK fails to remove the old Global Table.

Note: I am NOT talking about the deprecated `aws-dynamodb-global` module. I am talking about the new, experimental `aws-dynamodb.Table.replicationRegions`.

Reproduction Steps

It creates a new table with the proper replicas, but fails to delete the old table. The old table and its replicas remain in the account even after you destroy the stack (meaning they become completely detached from the stack).
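The reporter's reproduction code is not included in this capture. A minimal sketch of the scenario, assuming the v1-era CDK API, where the stack name, partition key, regions, and construct ids are all hypothetical:

```typescript
import * as cdk from '@aws-cdk/core';
import * as dynamodb from '@aws-cdk/aws-dynamodb';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'GlobalTableStack', {
  env: { region: 'us-east-1' },
});

// Deploy once with the construct id 'TableA', then rename the id
// (e.g. to 'TableB') and deploy again: the replacement table is
// created with its replicas, but the old table fails to delete.
new dynamodb.Table(stack, 'TableA', {
  partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
  replicationRegions: ['eu-west-1', 'ap-south-1'],
  removalPolicy: cdk.RemovalPolicy.DESTROY,
});
```

Changing a construct's id changes its logical id, so CloudFormation treats the rename as create-new-then-delete-old, which is what exercises the failing `CLEAN_UP` phase.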
Error Log
The errors are visible in deployment logs.
Error 1 (appears 6 times for 2 replica regions):
Error 2 (appears 3 times for 2 replica regions):
Note: I have replaced private info in the errors with `<xyz>`.

Environment
This is a 🐛 Bug Report