
[Bug]: aws_elasticache_replication_group transit_encryption_enable changes cause resource replacement #30700

Closed
jkoermer-eqxm opened this issue Apr 13, 2023 · 8 comments · Fixed by #30403
Labels
bug Addresses a defect in current functionality. service/elasticache Issues and PRs that pertain to the elasticache service.

Comments

@jkoermer-eqxm

Terraform Core Version

0.12.31

AWS Provider Version

4.62

Affected Resource(s)

aws_elasticache_replication_group

Expected Behavior

Updating the setting for transit encryption:
transit_encryption_enabled = true | false

should just modify the cluster in place. Running either of the following commands will modify the existing cluster without requiring it to be rebuilt:
aws elasticache modify-replication-group --replication-group-id cache-cluster --transit-encryption-enabled --transit-encryption-mode preferred --apply-immediately
aws elasticache modify-replication-group --replication-group-id cache-cluster --no-transit-encryption-enabled --apply-immediately
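
For context, the AWS documentation describes enabling in-transit encryption online as a two-step change: first enable it with --transit-encryption-mode preferred (as in the first command above), then, once every client connects over TLS, switch the mode to required. A minimal sketch of that second step, reusing the placeholder replication group ID from above:
aws elasticache modify-replication-group --replication-group-id cache-cluster --transit-encryption-mode required --apply-immediately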

Actual Behavior

Running terraform plan or apply causes the resource to be recreated:
~ transit_encryption_enabled = true -> false # forces replacement

Modifying the transit encryption outside of Terraform and then updating the Terraform configuration to match allows the resource to plan/apply as expected.
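
A minimal sketch of that out-of-band workaround, assuming the placeholder replication group ID from the commands above and that transit_encryption_enabled is supplied through a tfvars file (both are illustrative, not part of the actual configuration):

# 1. Change the replication group outside of Terraform.
aws elasticache modify-replication-group --replication-group-id cache-cluster --transit-encryption-enabled --transit-encryption-mode preferred --apply-immediately
# 2. Update the Terraform input to match the new state, e.g. in terraform.tfvars:
#    transit_encryption_enabled = true
# 3. The plan should then show no changes for the replication group.
terraform plan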

Relevant Error/Panic Output Snippet

No response

Terraform Configuration Files

resource "aws_elasticache_replication_group" "redis" {
  count                         = var.clustered_cache ? 1 : 0

  lifecycle {
    # Comment out ignore_changes to force upgrade based on version change
    prevent_destroy = false
    ignore_changes = [engine_version]
  }

  automatic_failover_enabled    = local.automatic_failover_enabled
  multi_az_enabled              = local.multi_az_enabled
  replication_group_id          = "${var.owner}-${var.application}-${var.environment}${var.suffix}"
  description                   = "MultiAZ ${var.owner}-${var.application}-${var.environment}${var.suffix} Cluster"
  port                          = var.elasticache_port
  parameter_group_name          = aws_elasticache_parameter_group.parameter_group_redis.id

  node_type                     = var.node_type
  num_cache_clusters            = var.node_count
  num_node_groups               = var.num_node_groups
  replicas_per_node_group       = var.replicas_per_node_group

  snapshot_name                 = var.snapshot_name
  snapshot_retention_limit      = local.backup_days
  snapshot_window               = var.backup_window
  maintenance_window            = var.maintenance_window
  auto_minor_version_upgrade    = var.auto_minor_version_upgrade
  engine                        = "redis"
  engine_version                = var.engine_version
  security_group_ids            = [aws_security_group.redis.id]
  subnet_group_name             = local.subnet_group_name
  apply_immediately             = var.apply_immediately

  at_rest_encryption_enabled    = var.at_rest_encryption_enabled
  auth_token                    = var.auth_token      # This is required if transit_encryption_enabled is set
  kms_key_id                    = var.at_rest_encryption_enabled == true ? local.kms_key_id : null
  
  transit_encryption_enabled    = var.transit_encryption_enabled # MODIFY THIS VALUE 
  tags                          = local.tags
}

Steps to Reproduce

Create an elasticache instance.
Modify the value of transit_encryption_enabled

Debug Output

No response

Panic Output

No response

Important Factoids

No response

References

This is related to a couple of other open issues concerning ElastiCache transit encryption, including:
#29403
#26367
https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-cache-cluster.html

I think this gets resolved with this PR:
#30403

Would you like to implement a fix?

No

@jkoermer-eqxm jkoermer-eqxm added bug Addresses a defect in current functionality. needs-triage Waiting for first response or review from a maintainer. labels Apr 13, 2023
@github-actions

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@github-actions github-actions bot added the service/elasticache Issues and PRs that pertain to the elasticache service. label Apr 13, 2023
@jkoermer-eqxm jkoermer-eqxm changed the title [Bug]: [Bug]: aws_elasticache_replication_group transit_encryption_enable changes cause resource replacement Apr 13, 2023
@justinretzolk justinretzolk removed the needs-triage Waiting for first response or review from a maintainer. label Apr 25, 2023
@teddy-wahle

AWS docs say, "You can enable in-transit encryption on a cluster only when creating the cluster. You cannot toggle in-transit encryption on and off by modifying a cluster." which indicates to me that this is neither fixable nor a bug.

@fjunejo-p

AWS docs say, "You can enable in-transit encryption on a cluster only when creating the cluster. You cannot toggle in-transit encryption on and off by modifying a cluster." which indicates to me that this is neither fixable nor a bug.

It is allowed for ElastiCache Redis. See the AWS Docs.

@stefansundin
Contributor

I didn't post a comment here, but a few months ago I created this PR, which fixes this issue: #30403

Add an upvote on the PR and hopefully HashiCorp takes a look at it more quickly. :)

(GitHub automatically posts an "activity" item above, but it isn't a comment, so it is easy to miss.)

@scott-doyland-burrows
Contributor

There also seems to be an option (at least on Redis 7.0.7) that allows encryption in transit to be switched, as shown in the screenshot:

[Screenshot: ElastiCache console setting for encryption in transit, with a Preferred option]

When switching to Preferred, the cluster is modified in place, and it is then possible to disable encryption entirely afterwards if required.

The options shown in the console do not appear to be available in the AWS Terraform provider.


Warning

This issue has been closed, meaning that any additional comments are hard for our team to see. Please assume that the maintainers will not see them.

Ongoing conversations amongst community members are welcome; however, the issue will be locked after 30 days. Moving conversations to another venue, such as the AWS Provider forum, is recommended. If you have additional concerns, please open a new issue, referencing this one where needed.

@github-actions github-actions bot added this to the v5.47.0 milestone Apr 19, 2024

This functionality has been released in v5.47.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
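
For anyone upgrading, a minimal sketch of how the change might be used after moving to v5.47.0, assuming the transit_encryption_mode argument added in #30403 accepts "preferred" and "required" (check the aws_elasticache_replication_group documentation for the exact behaviour before relying on this):

resource "aws_elasticache_replication_group" "redis" {
  # ... existing arguments unchanged ...

  # Step 1: enable in-transit encryption in place instead of forcing replacement.
  transit_encryption_enabled = true
  transit_encryption_mode    = "preferred"

  # Step 2 (a later apply, once all clients connect over TLS):
  # transit_encryption_mode = "required"
}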


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 27, 2024