
ERROR when creating an Aurora RDS global cluster and snapshot_identifier is defined #10965

Closed
jamengual opened this issue Nov 21, 2019 · 13 comments · Fixed by #14487
Labels: enhancement (Requests to existing resources that expand the functionality or scope.), service/rds (Issues and PRs that pertain to the rds service.)

@jamengual commented Nov 21, 2019

Hi.

I have been working with Terraform to create RDS global clusters without many issues until now.

I'm using the same code I use to create my prod global cluster to create another cluster based on the original prod cluster's snapshot. When snapshot_identifier is provided, the cluster gets created as a regional cluster and is not attached to the newly created global cluster. BUT if I use exactly the same code without specifying the snapshot_identifier, the global cluster is created and the new regional RDS cluster gets attached to it immediately.

Exactly the same behavior happens when using the console, except that in the console I can successfully create the global cluster from the snapshot.

Keep in mind that I replaced some text to hide personal information.

The sample code:

```hcl

# Global mydata RDS cluster

resource "aws_rds_global_cluster" "mydata_clone" {
  count                     = var.create_clone ? 1 : 0
  engine_version            = "5.6.10a"
  global_cluster_identifier = "clone-test-mydata-global"
  storage_encrypted         = true
  deletion_protection       = false
  provider                  = aws.primary
}


module "test_mydata_us_east_2_clone_cluster" {
  source         = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=0.17.0"
  enabled        = var.create_clone
  engine         = "aurora"
  engine_version = "5.6.10a"
  cluster_family = "aurora5.6"
  cluster_size   = 1
  namespace      = var.namespace
  stage          = var.environment
  name           = "us-east-2-${var.mydata_name}-clone"
#   admin_user     = var.mydata_db_user
#   admin_password = random_string.db_password.result
  db_name        = var.mydata_db_name
  instance_type  = "db.r5.2xlarge"
  vpc_id         = local.vpc_id
  security_groups = [
    local.sg-web-server-us-east-2,
    local.sg-app-scan-us-east-2,
    local.sg-app-scan-us-east-2,
    local.sg-app-scan-us-east-2
  ]
  allowed_cidr_blocks                 = var.mydata_allowed_cidr_blocks
  subnets                             = local.private_subnet_ids
  engine_mode                         = "global"
  global_cluster_identifier           = join("", aws_rds_global_cluster.mydata_clone.*.id)
  iam_database_authentication_enabled = true
  storage_encrypted                   = true
  deletion_protection                 = false
  iam_roles                           = ["${aws_iam_role.AuroraAccessToDataBuckets.arn}"]
  ##enabled_cloudwatch_logs_exports     = ["audit", "error", "general", "slowquery"]
  tags                                = local.complete_tags
  snapshot_identifier                 = var.snapshot_identifier
  skip_final_snapshot                 = true

  # DNS setting
  cluster_dns_name = "test-${var.environment}-mydata-writer-clone-us-east-2"
  reader_dns_name  = "test-${var.environment}-mydata-reader-clone-us-east-2"
  zone_id          = data.aws_route53_zone.ds_example_com.zone_id

  # enable monitoring every 30 seconds
  ##rds_monitoring_interval = 15

  # reference iam role created above
  ##rds_monitoring_role_arn      = aws_iam_role.mydata_enhanced_monitoring.arn
  ##performance_insights_enabled = true

  cluster_parameters = [
    {
      name         = "binlog_format"
      value        = "row"
      apply_method = "pending-reboot"
    },
    {
      apply_method = "immediate"
      name         = "max_allowed_packet"
      value        = "16777216"
    },
    {
      apply_method = "pending-reboot"
      name         = "performance_schema"
      value        = "1"
    },
    {
      apply_method = "immediate"
      name         = "server_audit_logging"
      value        = "0"
    }
  ]
  providers = {
    aws = aws.primary
  }
}
```

Plan output:

```

    + resource "aws_rds_global_cluster" "mydata_clone" {
        + arn                        = (known after apply)
        + deletion_protection        = false
        + engine                     = "aurora"
        + engine_version             = "5.6.10a"
        + global_cluster_identifier  = "clone-test-mydata-global"
        + global_cluster_resource_id = (known after apply)
        + id                         = (known after apply)
        + storage_encrypted          = true
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_db_parameter_group.default[0] will be created
    + resource "aws_db_parameter_group" "default" {
        + arn         = (known after apply)
        + description = "DB instance parameter group"
        + family      = "aurora5.6"
        + id          = (known after apply)
        + name        = "test-staging-us-east-2-mydata-clone"
        + name_prefix = (known after apply)
        + tags        = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_db_subnet_group.default[0] will be created
    + resource "aws_db_subnet_group" "default" {
        + arn         = (known after apply)
        + description = "Allowed subnets for DB cluster instances"
        + id          = (known after apply)
        + name        = "test-staging-us-east-2-mydata-clone"
        + name_prefix = (known after apply)
        + subnet_ids  = [
            + "subnet-1111111111111
            + "subnet-1111111111111
            + "subnet-1111111111111
          ]
        + tags        = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_rds_cluster.default[0] will be created
    + resource "aws_rds_cluster" "default" {
        + apply_immediately                   = true
        + arn                                 = (known after apply)
        + availability_zones                  = (known after apply)
        + backup_retention_period             = 5
        + cluster_identifier                  = "test-staging-us-east-2-mydata-clone"
        + cluster_identifier_prefix           = (known after apply)
        + cluster_members                     = (known after apply)
        + cluster_resource_id                 = (known after apply)
        + copy_tags_to_snapshot               = false
        + database_name                       = "testdb"
        + db_cluster_parameter_group_name     = "test-staging-us-east-2-mydata-clone"
        + db_subnet_group_name                = "test-staging-us-east-2-mydata-clone"
        + deletion_protection                 = false
        + enabled_cloudwatch_logs_exports     = []
        + endpoint                            = (known after apply)
        + engine                              = "aurora"
        + engine_mode                         = "global"
        + engine_version                      = "5.6.10a"
        + final_snapshot_identifier           = "test-staging-us-east-2-mydata-clone"
        + global_cluster_identifier           = (known after apply)
        + hosted_zone_id                      = (known after apply)
        + iam_database_authentication_enabled = true
        + iam_roles                           = [
            + "arn:aws:iam::1111111111:role/AuroraAccessToDataBuckets",
          ]
        + id                                  = (known after apply)
        + kms_key_id                          = (known after apply)
        + master_username                     = "admin"
        + port                                = (known after apply)
        + preferred_backup_window             = "07:00-09:00"
        + preferred_maintenance_window        = "wed:03:00-wed:04:00"
        + reader_endpoint                     = (known after apply)
        + skip_final_snapshot                 = true
        + snapshot_identifier                 = "snapshot-prep-for-data-load"
        + storage_encrypted                   = true
        + tags                                = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
        + vpc_security_group_ids              = (known after apply)
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_rds_cluster_instance.default[0] will be created
    + resource "aws_rds_cluster_instance" "default" {
        + apply_immediately               = (known after apply)
        + arn                             = (known after apply)
        + auto_minor_version_upgrade      = true
        + availability_zone               = (known after apply)
        + cluster_identifier              = (known after apply)
        + copy_tags_to_snapshot           = false
        + db_parameter_group_name         = "test-staging-us-east-2-mydata-clone"
        + db_subnet_group_name            = "test-staging-us-east-2-mydata-clone"
        + dbi_resource_id                 = (known after apply)
        + endpoint                        = (known after apply)
        + engine                          = "aurora"
        + engine_version                  = "5.6.10a"
        + id                              = (known after apply)
        + identifier                      = "test-staging-us-east-2-mydata-clone-1"
        + identifier_prefix               = (known after apply)
        + instance_class                  = "db.r5.2xlarge"
        + kms_key_id                      = (known after apply)
        + monitoring_interval             = 0
        + monitoring_role_arn             = (known after apply)
        + performance_insights_enabled    = false
        + performance_insights_kms_key_id = (known after apply)
        + port                            = (known after apply)
        + preferred_backup_window         = (known after apply)
        + preferred_maintenance_window    = (known after apply)
        + promotion_tier                  = 0
        + publicly_accessible             = false
        + storage_encrypted               = (known after apply)
        + tags                            = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
        + writer                          = (known after apply)
      }

```

Version:
terraform_0.12.16
provider "local" (hashicorp/local) 1.4.0
provider "aws" (hashicorp/aws) 2.38.0...
provider "null" (hashicorp/null) 2.1.2...
provider "template" (hashicorp/template) 2.1.2
provider "mysql" (terraform-providers/mysql) 1.9.0
provider "random" (hashicorp/random) 2.2.1

Expected Behavior

A new global cluster should be created, and a new RDS cluster restored from the snapshot should be attached to it.

Actual Behavior

A global cluster is created and a standalone RDS cluster is created from the snapshot, but the RDS cluster is not attached to the global cluster.

When created without using snapshot_identifier, the global cluster and RDS clusters are created correctly.

The error when trying to re-apply the Terraform is:

```
Error: Existing RDS Clusters cannot be added to an existing RDS Global Cluster
```
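For reference, the failure distills to this minimal shape (a sketch only; the names and snapshot identifier are placeholders taken from the configuration and plan output above):

```hcl
resource "aws_rds_global_cluster" "example" {
  global_cluster_identifier = "clone-test-mydata-global"
  engine_version            = "5.6.10a"
  storage_encrypted         = true
}

resource "aws_rds_cluster" "example" {
  cluster_identifier        = "test-staging-us-east-2-mydata-clone"
  engine                    = "aurora"
  engine_mode               = "global"
  engine_version            = "5.6.10a"
  global_cluster_identifier = aws_rds_global_cluster.example.id
  skip_final_snapshot       = true

  # With this argument set, the cluster is restored standalone and never
  # attached to the global cluster; without it, attachment works.
  snapshot_identifier = "snapshot-prep-for-data-load"
}
```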

@ghost added the service/rds label on Nov 21, 2019
@github-actions added the needs-triage (Waiting for first response or review from a maintainer.) label on Nov 21, 2019
@jamengual (Author)

Hi, is there anything I can do to help someone take a look at this?

@jamengual changed the title from "Inconsitancy when creating Aurora RDS global cluster when snapshot_identifier is defined" to "ERROR when creating an Aurora RDS global cluster and snapshot_identifier is defined" on Dec 2, 2019
@marinsalinas

> Hi, is there anything I can do to help someone take a look at this?

Maybe @bflad, since he did the last commit on the RDS resources.

@bflad (Contributor) commented Apr 24, 2020

Hi folks 👋 My focus is elsewhere at the moment, but from briefly searching around this topic, it appears RDS expects the primary cluster to be specified when the Global Cluster is created (via a SourceDBClusterIdentifier parameter) rather than attached afterwards.

This represents a bit of a problem in Terraform, though, as doing it properly would introduce a circular reference:

```hcl
# Potential implementation
# The source_db_cluster_identifier argument does not currently exist
resource "aws_rds_global_cluster" "example" {
  # ... other configuration ...
  source_db_cluster_identifier = aws_rds_cluster.example.id
}

resource "aws_rds_cluster" "example" {
  snapshot_identifier = aws_db_cluster_snapshot.example.id

  # NOTE: Due to restoring this Cluster from Cluster Snapshot
  # and using this Cluster to create a Global Cluster, the
  # global_cluster_identifier attribute will become populated and
  # Terraform will begin showing it as a difference. We cannot do:
  # global_cluster_identifier = aws_rds_global_cluster.example.id
  # as it introduces a circular reference. As a potential workaround:
  lifecycle {
    ignore_changes = [global_cluster_identifier]
  }
}
```

This workaround would need to be documented to reduce potential confusion around the subject. Another option would be to mark the global_cluster_identifier argument in the aws_rds_cluster resource as Computed: true and annotate the documentation with:

* global_cluster_identifier - (Optional) The global cluster identifier specified on [`aws_rds_global_cluster`](/docs/providers/aws/r/rds_global_cluster.html). Terraform will only perform drift detection if this argument has a value. If using this cluster to restore a snapshot to create a global cluster, this should be omitted to prevent a circular reference.

Hope this helps.

EDIT: Please note that this is marked as an enhancement since this is a feature request to support the source_db_cluster_identifier argument on the aws_rds_global_cluster resource, which appears to be how RDS expects this to be implemented.
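Under that second option, the user-facing configuration would presumably reduce to something like this (a sketch only, assuming global_cluster_identifier were marked Computed: true in the provider schema):

```hcl
resource "aws_rds_cluster" "example" {
  snapshot_identifier = aws_db_cluster_snapshot.example.id

  # If the argument were Computed, the value RDS populates after the
  # Global Cluster is created from this cluster would be accepted
  # without ignore_changes and without a circular reference.
}
```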

@bflad added the enhancement label and removed the needs-triage label on Apr 24, 2020
@marinsalinas
Thanks @bflad. I have another question: how will this work with multiple secondary clusters?

@bflad (Contributor) commented May 15, 2020

Hi @marinsalinas 👋 I'm not sure I understand the question; could you elaborate?

@marinsalinas

Hey @bflad, let me explain further:

Given the potential implementation you sketched above, how will we add secondary clusters to the configuration?

I'm thinking something like:

```hcl
# Potential implementation
# The source_db_cluster_identifier argument does not currently exist
resource "aws_rds_global_cluster" "example" {
  # ... other configuration ...
  source_db_cluster_identifier = aws_rds_cluster.example.id
}

resource "aws_rds_cluster" "example" {
  snapshot_identifier = aws_db_cluster_snapshot.example.id

  # NOTE: Due to restoring this Cluster from Cluster Snapshot
  # and using this Cluster to create a Global Cluster, the
  # global_cluster_identifier attribute will become populated and
  # Terraform will begin showing it as a difference. We cannot do:
  # global_cluster_identifier = aws_rds_global_cluster.example.id
  # as it introduces a circular reference. As a potential workaround:
  lifecycle {
    ignore_changes = [global_cluster_identifier]
  }
}

provider "aws" {
  region = "us-west-2"
  alias  = "uw2"
}

# How I'd add secondary clusters:
resource "aws_rds_cluster" "secondary" {
  provider                  = aws.uw2
  global_cluster_identifier = aws_rds_global_cluster.example.id

  depends_on = [aws_rds_cluster.example]
}
```

Is that correct?

@bflad (Contributor) commented May 17, 2020

@marinsalinas that would be the expected configuration, potentially without the ignore_changes depending on the final implementation 👍 (You could also drop the depends_on; the dependency is already transitive: aws_rds_cluster.secondary -> aws_rds_global_cluster.example -> aws_rds_cluster.example.)
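Putting that together, the secondary cluster could presumably be simplified to the following (a sketch under the assumptions above, reusing the aws.uw2 provider alias from the earlier example):

```hcl
resource "aws_rds_cluster" "secondary" {
  provider = aws.uw2

  # Referencing the global cluster already creates a transitive
  # dependency on aws_rds_cluster.example, so depends_on is unnecessary.
  global_cluster_identifier = aws_rds_global_cluster.example.id
}
```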

@ghost commented Jun 15, 2020

I have the exact same problem.

When I create a global cluster from scratch everything works, but this is not what I need to do.

I have an existing DB which needs to be replicated in another region. After many tests, there is no way to do this with Terraform.
Last error: Existing RDS Clusters cannot be added to an existing RDS Global Cluster

I see at least 2 ways:

  • Create a new global cluster and attach existing DB to it
  • Create a new global cluster and a new DB from existing Snapshot

The first way is what the AWS Console does via "Add region".

I think @bflad has pointed out the main problem: the Terraform AWS provider is currently missing the SourceDBClusterIdentifier parameter needed to connect the global cluster with an existing cluster. Quoting the AWS documentation:

> You can create a global database that is initially empty, and then add a primary cluster and a secondary cluster to it. Or you can specify an existing Aurora cluster during the create operation, and this cluster becomes the primary cluster of the global database.

@phani308

@bflad Hello,

I am facing a similar issue when trying to attach an existing RDS cluster to a Global cluster. Terraform errors out as follows:
Error: Existing RDS Clusters cannot be added to an existing RDS Global Cluster

AWS does offer API/CLI/SDK ways to attach an existing regional cluster to a Global cluster while creating it, via --source-db-cluster-identifier. It would be great if Terraform could release this enhancement to support attaching existing regional DB clusters to a Global cluster.

@bflad (Contributor) commented Aug 6, 2020

Enhancement submitted: #14487

bflad added a commit that referenced this issue Aug 6, 2020
…ter_identifier arguments, add global_cluster_members attribute

Reference: #10965

Output from acceptance testing:

```
--- PASS: TestAccAWSRdsGlobalCluster_disappears (11.01s)
--- PASS: TestAccAWSRdsGlobalCluster_Engine_Aurora (14.06s)
--- PASS: TestAccAWSRdsGlobalCluster_EngineVersion_AuroraPostgresql (14.18s)
--- PASS: TestAccAWSRdsGlobalCluster_basic (14.27s)
--- PASS: TestAccAWSRdsGlobalCluster_EngineVersion_AuroraMySQL (14.55s)
--- PASS: TestAccAWSRdsGlobalCluster_EngineVersion_Aurora (14.72s)
--- PASS: TestAccAWSRdsGlobalCluster_DeletionProtection (21.99s)
--- PASS: TestAccAWSRdsGlobalCluster_DatabaseName (23.70s)
--- PASS: TestAccAWSRdsGlobalCluster_StorageEncrypted (25.16s)
--- PASS: TestAccAWSRdsGlobalCluster_SourceDbClusterIdentifier (168.11s)
```
@bflad added this to the v3.1.0 milestone on Aug 6, 2020
bflad added a commit that referenced this issue Aug 6, 2020

…ter_identifier arguments, add global_cluster_members attribute (#14487)

Reference: #10965
@bflad (Contributor) commented Aug 6, 2020

Hi folks 👋 Support for the new force_destroy and source_db_cluster_identifier arguments in the aws_rds_global_cluster resource has been merged and will release with version 3.1.0 of the Terraform AWS Provider, likely later today. The 3.1.0 and later resource documentation will include a quick example of how to use these to create a Global Cluster from an existing DB Cluster. 👍
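A minimal sketch of how the new arguments fit together, based on the discussion in this thread (names are placeholders; see the 3.1.0 resource documentation for the authoritative example):

```hcl
resource "aws_rds_cluster" "example" {
  # ... other configuration ...

  # Creating the Global Cluster from this cluster populates
  # global_cluster_identifier on the RDS side; ignoring it avoids
  # both a permanent diff and a circular reference.
  lifecycle {
    ignore_changes = [global_cluster_identifier]
  }
}

resource "aws_rds_global_cluster" "example" {
  global_cluster_identifier = "example-global"

  # Assumed to accept the existing cluster's ARN, mirroring the RDS
  # CreateGlobalCluster SourceDBClusterIdentifier parameter.
  source_db_cluster_identifier = aws_rds_cluster.example.arn

  # force_destroy allows Terraform to remove member clusters from the
  # Global Cluster on destroy.
  force_destroy = true
}
```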

@ghost commented Aug 7, 2020

This has been released in version 3.1.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

@ghost commented Sep 6, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Sep 6, 2020