
aws_elasticache_replication_group: unable to import without destroying the cluster #16141

Closed
vthiery opened this issue Nov 11, 2020 · 2 comments
Labels
bug Addresses a defect in current functionality. service/elasticache Issues and PRs that pertain to the elasticache service.

Comments

@vthiery (Contributor) commented Nov 11, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

❯ terraform version
Terraform v0.12.29
+ provider.aws v3.14.1

Affected Resource(s)

  • aws_elasticache_replication_group

Terraform Configuration Files

resource "aws_elasticache_replication_group" "my_cluster" {
  replication_group_id          = "my-cluster"
  replication_group_description = "my cluster"

  subnet_group_name  = aws_elasticache_subnet_group.private.name
  security_group_ids = [aws_security_group.redis.id]

  engine                     = "redis"
  engine_version             = "5.0.5"
  node_type                  = "cache.t3.micro"
  parameter_group_name       = "default.redis5.0"
  number_cache_clusters      = 1
  transit_encryption_enabled = true
  auth_token                 = data.sops_external.secrets.data.redis_my_cluster_password
}

Expected Behavior

After applying this configuration, I removed the resource from the state, re-imported it, and ran terraform plan. I expected the plan to show no changes.

Actual Behavior

After state rm and import, the plan reads:

Terraform will perform the following actions:

  # aws_elasticache_replication_group.my_cluster must be replaced
-/+ resource "aws_elasticache_replication_group" "my_cluster" {
      + apply_immediately              = (known after apply)
        at_rest_encryption_enabled     = false
      + auth_token                     = (sensitive value)
        auto_minor_version_upgrade     = true
        automatic_failover_enabled     = false
      + configuration_endpoint_address = (known after apply)
        engine                         = "redis"
        engine_version                 = "5.0.5"
      ~ id                             = "my-cluster" -> (known after apply)
      ~ maintenance_window             = "wed:05:30-wed:06:30" -> (known after apply)
      ~ member_clusters                = [
          - "my-cluster-001",
        ] -> (known after apply)
        node_type                      = "cache.t3.micro"
        number_cache_clusters          = 1
        parameter_group_name           = "default.redis5.0"
      - port                           = 6379 -> null
      ~ primary_endpoint_address       = "master.my-cluster.cuzztv.euc1.cache.amazonaws.com" -> (known after apply)
        replication_group_description  = "my cluster"
        replication_group_id           = "my-cluster"
        security_group_ids             = [
            "sg-0d9fe83c550fa69a6",
        ]
      ~ security_group_names           = [] -> (known after apply)
      - snapshot_retention_limit       = 0 -> null
      ~ snapshot_window                = "23:00-00:00" -> (known after apply)
        subnet_group_name              = "private-vthiery"
        transit_encryption_enabled     = true

      + cluster_mode {
          + num_node_groups         = (known after apply)
          + replicas_per_node_group = (known after apply)
        }

      - timeouts {}
    }

Plan: 1 to add, 0 to change, 1 to destroy.

On top of planning changes, the reason for the forced replacement is not displayed. After digging a bit, we found that auth_token was set to null in the imported state. Manually setting the auth_token value in the state file did the trick and made the plan clean again (i.e. it reports "No changes").
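For reference, a rough sketch of that manual state edit, assuming the resource address from the configuration above and that jq is available (the token value is a placeholder, not the real secret):

# 1. Download the current state to a local file
terraform state pull > state.json

# 2. Set auth_token on the imported resource (it is null right after the import)
jq '(.resources[]
     | select(.type == "aws_elasticache_replication_group" and .name == "my_cluster")
     | .instances[].attributes.auth_token) |= "<real auth token>"' state.json > patched.json

# 3. Bump the serial so Terraform accepts the pushed state
jq '.serial += 1' patched.json > patched_serial.json

# 4. Upload the edited state back
terraform state push patched_serial.json

After pushing the patched state, terraform plan should then report no changes, as described above.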

Steps to Reproduce

  1. terraform apply
  2. terraform state rm aws_elasticache_replication_group.my_cluster
  3. terraform import aws_elasticache_replication_group.my_cluster my-cluster
  4. terraform plan
@ghost ghost added the service/elasticache Issues and PRs that pertain to the elasticache service. label Nov 11, 2020
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Nov 11, 2020
@breathingdust breathingdust added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Sep 16, 2021
@maur1th (Contributor) commented Dec 9, 2021

#16203 fixed this (released in v3.67.0). Tested it on my end. Should probably close this issue as a result.

@vthiery vthiery closed this as completed Jun 30, 2023
@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 31, 2023