
dynamodb: Replication regions are incompatible with resource policies in TableV2 #30705

Closed
VincentPheng opened this issue Jun 28, 2024 · 3 comments · Fixed by #31513

Labels: @aws-cdk/aws-dynamodb · bug · effort/medium · p2

Comments


VincentPheng commented Jun 28, 2024

Describe the bug

As stated in the AWS documentation for resource policies, we cannot create a replica and add a resource policy to that replica in the same stack update.

This means that every time we want to add a new replication region in the CDK, we need to somehow omit the resource policy from that new region for the initial deployment. TableV2 allows us to customize each replica, including whether or not it has a resource policy, but it eagerly applies the resource policy defined on the TableV2 construct to every replica, and we are unable to override it with undefined to omit the resourcePolicy.

Expected Behavior

When adding a new replica region that specifies an undefined resource policy, TableV2 should not apply the resource policy defined on its construct to that replica.

Current Behavior

TableV2 eagerly adds the resource policy to all replicas, even when a replica specifies it should be undefined, and the deployment fails. The only way to add a new replica without a resource policy is to first deploy a stack update that creates the new replica and removes the resource policy from all other replicas, then follow up with a second stack update that re-adds the resource policy to all replicas. Between the first and second update, none of the tables will have a resource policy attached.

Reproduction Steps
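
For reference, tablePolicyDocument in the snippets below is an iam.PolicyDocument; any valid resource policy reproduces the issue. A minimal sketch (the statements here are illustrative, not part of the original report):

import * as iam from 'aws-cdk-lib/aws-iam';

// Illustrative policy: allow the account root to read items.
const tablePolicyDocument = new iam.PolicyDocument({
  statements: [
    new iam.PolicyStatement({
      actions: ['dynamodb:GetItem'],
      principals: [new iam.AccountRootPrincipal()],
      resources: ['*'],
    }),
  ],
});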

Deploying a TableV2 WITHOUT replicas but WITH the resource policy works:

new TableV2(this, `MyTable-${stage}`, {
  pointInTimeRecovery: true,
  partitionKey: {
    name: 'key',
    type: AttributeType.STRING,
  },
  tableName: 'MyTable',
  resourcePolicy: tablePolicyDocument,
});

If I then try to deploy WITH a replica (us-east-1) and WITH a resource policy, it fails with the same-stack-update error:

new TableV2(this, `MyTable-${stage}`, {
  pointInTimeRecovery: true,
  partitionKey: {
    name: 'key',
    type: AttributeType.STRING,
  },
  tableName: 'MyTable',
  replicas: [{ region: 'us-east-1' }],
  resourcePolicy: tablePolicyDocument,
});

When I delete the stack, start over, and deploy WITH a replica (us-east-1) but WITHOUT the resource policy, this succeeds:

new TableV2(this, `MyTable-${stage}`, {
  pointInTimeRecovery: true,
  partitionKey: {
    name: 'key',
    type: AttributeType.STRING,
  },
  tableName: 'MyTable',
  replicas: [{ region: 'us-east-1' }],
});

When I then add the resource policy WITHOUT adding a new replica region, this works, and the resource policy is added to both the table in us-west-2 and the replica in us-east-1:

new TableV2(this, `MyTable-${stage}`, {
  pointInTimeRecovery: true,
  partitionKey: {
    name: 'key',
    type: AttributeType.STRING,
  },
  tableName: 'MyTable',
  replicas: [{ region: 'us-east-1' }],
  resourcePolicy: tablePolicyDocument,
});

But when I then try to add another region (us-east-2), it fails with the same stack update error:

new TableV2(this, `MyTable-${stage}`, {
  pointInTimeRecovery: true,
  partitionKey: {
    name: 'key',
    type: AttributeType.STRING,
  },
  tableName: 'MyTable',
  replicas: [{ region: 'us-east-1' }, { region: 'us-east-2' }],
  resourcePolicy: tablePolicyDocument,
});

If I try to override the resourcePolicy on the new replica to not include one, TableV2 still adds tablePolicyDocument to it in the CFN template, and the deployment fails with the same stack update error:

new TableV2(this, `MyTable-${stage}`, {
  pointInTimeRecovery: true,
  partitionKey: {
    name: 'key',
    type: AttributeType.STRING,
  },
  tableName: 'MyTable',
  replicas: [{ region: 'us-east-1' }, { region: 'us-east-2', resourcePolicy: undefined }],
  resourcePolicy: tablePolicyDocument,
});

I've so far been able to work around this by using the L1 escape hatch with CfnGlobalTable, manually specifying each replica, and selectively adding the resource policy to each one:

    new CfnGlobalTable(this, `MyTable-${stage}`, {
      tableName: 'MyTable',
      attributeDefinitions: [{ attributeName: 'key', attributeType: AttributeType.STRING }],
      keySchema: [{ attributeName: 'key', keyType: 'HASH' }],
      replicas: [
        {
          pointInTimeRecoverySpecification: {
            pointInTimeRecoveryEnabled: true,
          },
          region: 'us-west-2', // main table region
          resourcePolicy: { policyDocument: tablePolicyDocument },
        },
        {
          pointInTimeRecoverySpecification: {
            pointInTimeRecoveryEnabled: true,
          },
          region: 'us-east-1',
        },
      ],
      billingMode: 'PAY_PER_REQUEST',
      streamSpecification: { streamViewType: 'NEW_AND_OLD_IMAGES' },
    });
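
The L1 route works because CfnGlobalTable exposes resourcePolicy directly on each replica, so a brand-new replica can be deployed without a policy first and have the policy added in a follow-up update.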

Possible Solution

The easiest solution is to update this line from:

const resourcePolicy = props.resourcePolicy ?? this.tableOptions.resourcePolicy;

to

const resourcePolicy = props.resourcePolicy;

and require all replicas to manually include a resourcePolicy if one is desired.

Ideally, the construct could allow null: when null is specified on a specific replica, no resourcePolicy would be added to that replica, even when one is defined on the TableV2 itself:

new TableV2(this, `MyTable-${stage}`, {
  pointInTimeRecovery: true,
  partitionKey: {
    name: 'key',
    type: AttributeType.STRING,
  },
  tableName: 'MyTable',
  // us-east-2 should not have tablePolicyDocument added in the template
  replicas: [{ region: 'us-east-1' }, { region: 'us-east-2', resourcePolicy: null }],
  resourcePolicy: tablePolicyDocument,
});
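
For illustration, a hypothetical sketch of how TableV2 could resolve each replica's policy internally to support this (the helper name and signature are illustrative, not the actual aws-cdk-lib internals):

import * as iam from 'aws-cdk-lib/aws-iam';

// Hypothetical helper, not the actual TableV2 implementation.
function resolveReplicaResourcePolicy(
  replicaPolicy: iam.PolicyDocument | null | undefined,
  tablePolicy: iam.PolicyDocument | undefined,
): iam.PolicyDocument | undefined {
  // An explicit null means "emit no ResourcePolicy for this replica",
  // even when a table-level policy is set.
  if (replicaPolicy === null) {
    return undefined;
  }
  // Otherwise fall back to the table-level policy, preserving the current default.
  return replicaPolicy ?? tablePolicy;
}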

Additional Information/Context

No response

CDK CLI Version

2.136.0

Framework Version

No response

Node.js Version

v18.18.2

OS

Linux

Language

TypeScript

Language Version

No response

Other information

No response

pahud (Contributor) commented Jul 1, 2024

Thank you for the report. It makes sense to me. We will discuss it with @LeeroyHannigan.

Before that, you can probably work around it like this:

import { Stack, StackProps } from 'aws-cdk-lib';
import * as ddb from 'aws-cdk-lib/aws-dynamodb';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class DummyStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const policy = new iam.PolicyDocument({
      statements: [
        new iam.PolicyStatement({
          actions: ['dynamodb:GetItem'],
          principals: [new iam.AccountRootPrincipal()],
          resources: ['*'],
        }),
      ],
    });

    const table = new ddb.TableV2(this, 'Table', {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: ddb.AttributeType.STRING,
      },
      tableName: 'MyTable',
      replicas: [{ region: 'us-west-1' }, { region: 'us-west-2', resourcePolicy: undefined }],
      resourcePolicy: policy,
    });

    // Delete the ResourcePolicy that TableV2 eagerly added to the second replica
    // via an escape-hatch override on the synthesized CfnGlobalTable.
    const cfnGlobalTable = table.node.defaultChild as ddb.CfnGlobalTable;
    cfnGlobalTable.addPropertyDeletionOverride('Replicas.1.ResourcePolicy');
  }
}
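
Note that the Replicas.1 path refers to the replica's index in the synthesized Replicas array, which also includes the primary table's region; it is worth running cdk synth to confirm which index corresponds to the replica whose policy you want to drop.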

We welcome PRs.

Comments on closed issues and PRs are hard for our team to see.
If you need help, please open a new issue that references this one.
