
Error when creating RDS MAZ cluster with custom parameter group #24721

Closed
cxystras opened this issue May 10, 2022 · 3 comments · Fixed by #25718
Labels
bug Addresses a defect in current functionality. service/rds Issues and PRs that pertain to the rds service.

Comments

@cxystras

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform core version: 1.1.8
AWS provider version: ~> 4.10

Affected Resource(s)

  • aws_rds_cluster

RDS MAZ cluster (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html#multi-az-db-clusters-concepts-overview)

Terraform Configuration Files

main.tf snippet:

....

resource "aws_rds_cluster_parameter_group" "rds_maz_cluster_mysql_pg" {
  name        = "pg-rds-maz-cluster"
  family      = "mysql8.0"
  description = "rds multi az cluster parameter group"
}


resource "aws_rds_cluster" "RDS_MAZ_cluster-mysql" {
  cluster_identifier              = "rds-maz-cluster-mysql"
  availability_zones              = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  engine                          = "mysql"
  engine_version                  = "8.0.28"
  db_cluster_instance_class       = "db.m6gd.large"
  storage_type                    = "io1"
  allocated_storage               = 100
  iops                            = 1000
  db_subnet_group_name            = "db-subnetgroup"
  db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.rds_maz_cluster_mysql_pg.name
  storage_encrypted               = true
  skip_final_snapshot             = true
  master_username                 = "user"
  master_password                 = "password"
}

....

Debug Output

Error: Error waiting for RDS Cluster state to be "available": unexpected state 'rebooting', wanted target 'available'. last error: %!s()
with aws_rds_cluster.RDS_MAZ_cluster-mysql,
on main.tf line 153, in resource "aws_rds_cluster" "RDS_MAZ_cluster-mysql":
153: resource "aws_rds_cluster" "RDS_MAZ_cluster-mysql" {

Panic Output

N/A

Expected Behavior

Terraform should recognise the rebooting state as an expected one and continue waiting for the cluster to become available.

Actual Behavior

terraform apply fails with the unexpected state error.

Additional feedback

This happens because, when the RDS cluster's parameter group (PG) is set to a custom one, the cluster needs to reboot for the change to take effect (it is a static parameter change). When creating an RDS MAZ cluster with a custom PG, the cluster is created and, as soon as it becomes available, it starts rebooting, so Terraform observes a state it does not expect. Some might classify this as a race condition. An easy fix would be either to add logic that handles the rebooting state (an additional timeout that waits for the reboot to complete and for the cluster to return to available), or to add the rebooting state to the list of expected pending states, as sketched below.
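For context, the provider waits for the cluster by polling its status and fails as soon as it observes a state that is neither in its pending list nor the target. A minimal sketch of the proposed change follows, assuming a StateChangeConf-style waiter; the function name, the pending-state list, and the refresh logic are illustrative assumptions, not the provider's actual code (the real change is in #25718).

// Hypothetical sketch of the proposed waiter change; identifiers and the
// pending-state list are assumptions, not the provider's actual code.
package rdswait

import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rds"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

func waitDBClusterAvailable(ctx context.Context, conn *rds.RDS, id string, timeout time.Duration) error {
	stateConf := &resource.StateChangeConf{
		// "rebooting" is the state the waiter currently treats as unexpected;
		// listing it as pending lets Terraform keep polling until the cluster
		// returns to "available" after the parameter-group reboot.
		Pending: []string{"creating", "backing-up", "modifying", "rebooting"},
		Target:  []string{"available"},
		Timeout: timeout,
		Refresh: func() (interface{}, string, error) {
			out, err := conn.DescribeDBClustersWithContext(ctx, &rds.DescribeDBClustersInput{
				DBClusterIdentifier: aws.String(id),
			})
			if err != nil {
				return nil, "", err
			}
			if len(out.DBClusters) == 0 {
				return nil, "", nil // not found yet; the waiter will retry
			}
			cluster := out.DBClusters[0]
			return cluster, aws.StringValue(cluster.Status), nil
		},
	}

	_, err := stateConf.WaitForStateContext(ctx)
	return err
}

With "rebooting" in the pending list, the waiter simply keeps refreshing until the reboot finishes instead of aborting the apply.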

Steps to Reproduce

  1. terraform apply

Important Factoids

N/A

References

Similar issues:
#2781 (hashicorp/terraform#4490) -> same resource, different target state

@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/rds Issues and PRs that pertain to the rds service. labels May 10, 2022
@justinretzolk justinretzolk added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels May 12, 2022
@nadivravivz

Hi,

I'm having the same problem with
AWS provider: hashicorp/aws v4.20.1
Terraform version: Terraform v1.2.2

This is the error I'm getting:
Error: Error waiting for RDS Cluster state to be "available": unexpected state 'rebooting', wanted target 'available'. last error: %!s()

@matharoo
Contributor

matharoo commented Jul 6, 2022

I am getting the same error when setting up a Postgres RDS cluster, even with the default parameter group.

RDS cluster configuration I am trying to set up:

availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
engine = "postgres"
engine_mode = "provisioned"
engine_version = "13.4"
db_cluster_instance_class = "db.m5d.large"

My Terraform version constraints:

terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.17.0"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 2.0"
    }
  }
}

Error during terraform apply:

Error: Error waiting for RDS Cluster state to be "available": unexpected state 'rebooting', wanted target 'available'. last error: %!s(<nil>)

I have a PR to fix this: #25718

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 15, 2022