
redshift cluster restore from snapshot misconfigures cluster for parameters number_of_nodes and kms_key #13176

Closed
imdhruva opened this issue May 6, 2020 · 5 comments · Fixed by #13203
Labels
bug Addresses a defect in current functionality. service/redshift Issues and PRs that pertain to the redshift service.
Milestone

Comments

@imdhruva
Contributor

imdhruva commented May 6, 2020

Facing a similar issue here. I am trying to create a Redshift cluster from a snapshot. Restoring from a snapshot should let me change the existing 7-node ds2-based cluster into a 2-node ra3.4xlarge cluster. However, Terraform seems to ignore the configuration and instead:

  • spins up a new 7-node ra3.4xlarge cluster
  • enables KMS encryption on the cluster

Terraform Version

Terraform v0.12.12
+ provider.aws v2.60.0

Affected Resource(s)

  • aws_redshift_cluster

Terraform Configuration Files

resource "aws_redshift_cluster" "main" {
  cluster_identifier                  = "xxx-redshift-cluster"
  database_name                       = "dev" # explicitly defining this to default to avoid confusion
  node_type                           = "ra3.4xlarge"
  cluster_subnet_group_name           = "sdata-redshift"
  preferred_maintenance_window        = "sun:06:25-sun:06:55"
  automated_snapshot_retention_period = 1
  number_of_nodes                     = 2
  publicly_accessible                 = false
  snapshot_identifier                 = "rs:xxxxxxxxxxxxxx"
  snapshot_cluster_identifier         = "xxxxxxxxxxxxxx"
  final_snapshot_identifier           = "xxx-redshift-cluster-snapshot-final"
  vpc_security_group_ids              = ["sg-beaa1bc7"]
  tags = {
    vpc   = "xxx"
    role  = "redshift"
    class = "cluster"
  }
}

Debug Output

https://gist.github.com/imdhruva/91911af2f501fe319337d2253481edca

Expected Behavior

aws_redshift_cluster with 2 nodes and no kms enabled

Actual Behavior

aws_redshift_cluster with 7 nodes and kms enabled

Steps to Reproduce

terraform init
terraform apply
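To confirm the drift after the apply, the cluster's actual node count and encryption state can be inspected with the AWS CLI (a sketch; the cluster identifier matches the placeholder in the config above and must be adjusted for a real environment):

```shell
# Show the restored cluster's actual node type, node count, and encryption flag.
aws redshift describe-clusters \
  --cluster-identifier xxx-redshift-cluster \
  --query 'Clusters[0].{NodeType:NodeType,Nodes:NumberOfNodes,Encrypted:Encrypted}'
```

With this bug, the output reports 7 nodes and `Encrypted: true` even though the Terraform config requested 2 unencrypted nodes.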

References

#11367

@ghost ghost added the service/redshift Issues and PRs that pertain to the redshift service. label May 6, 2020
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label May 6, 2020
@andy68-bris

I can confirm the same behavior going the other way. We have an existing 7-node ds2-based cluster. A Terraform config meant to create a new 20-node ra3.4xlarge cluster from the ds2 snapshot actually creates a 7-node ra3.4xlarge cluster.
A subsequent plan then detects a change from 7 nodes to 20. However, I think applying that change would trigger a classic resize rather than an elastic one.
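For context, the resize path can also be requested explicitly outside Terraform: the AWS CLI's `resize-cluster` command performs an elastic resize by default, and `--classic` forces a classic resize (a sketch with a hypothetical cluster identifier):

```shell
# Request an elastic resize to 20 nodes; add --classic to force a classic resize.
aws redshift resize-cluster \
  --cluster-identifier xxx-redshift-cluster \
  --number-of-nodes 20
```

This is only a manual workaround for the node-count drift described above; it does not address the encryption mismatch.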

@dpacaud

dpacaud commented Jun 4, 2020

This just bit us quite hard by spawning a 29 node cluster instead of 5 :(

@rednuht

rednuht commented Nov 4, 2020

We are also experiencing that tags are not set when restoring from a snapshot. A second apply afterwards corrects this, but restoring a snapshot of our main cluster's size takes many hours. In our case that means cost allocation is completely wrong for a huge cluster for many hours, which is totally unacceptable!

@breathingdust breathingdust added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Sep 21, 2021
@github-actions github-actions bot added this to the v4.9.0 milestone Mar 29, 2022
@github-actions

github-actions bot commented Apr 7, 2022

This functionality has been released in v4.9.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

@github-actions

github-actions bot commented May 8, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 8, 2022