
Terraform does not handle updating a PostgreSQL Server Replica #10284

Closed
Djiit opened this issue Jan 22, 2021 · 2 comments · Fixed by #10754

Comments


Djiit commented Jan 22, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

❯ terraform -v
Terraform v0.13.5

Using azurerm v2.42.0

Affected Resource(s)

  • azurerm_postgresql_server

Terraform Configuration Files

I use this tiny custom module:

resource "azurerm_postgresql_server" "postgres_master" {
  name                             = var.main_db_name
  location                         = var.main_location
  resource_group_name              = var.resource_group
  sku_name                         = var.sku_name
  administrator_login              = var.administrator_login
  administrator_login_password     = var.administrator_login_password
  version                          = var.postgres_version
  ssl_enforcement_enabled          = true
  ssl_minimal_tls_version_enforced = "TLS1_2"
  backup_retention_days            = var.retention_days
  geo_redundant_backup_enabled     = var.replicas_count != 0
  storage_mb                       = var.storage_mb
  public_network_access_enabled    = var.public_access_enabled
  auto_grow_enabled                = true

  lifecycle {
    ignore_changes = [
      # Autogrow is enabled
      storage_mb,
    ]
  }

  tags = var.tags
}

resource "azurerm_postgresql_server" "postgres_standby" {
  count                            = var.replicas_count
  # count.index rather than var.replicas_count, so each replica gets a unique name
  name                             = "${azurerm_postgresql_server.postgres_master.name}-r-${count.index}"
  location                         = var.replicas_location
  resource_group_name              = var.resource_group
  sku_name                         = var.sku_name
  version                          = var.postgres_version
  ssl_enforcement_enabled          = true
  ssl_minimal_tls_version_enforced = "TLS1_2"
  storage_mb                       = var.storage_mb
  public_network_access_enabled    = var.public_access_enabled
  create_mode                      = "Replica"
  creation_source_server_id        = azurerm_postgresql_server.postgres_master.id
  auto_grow_enabled                = true

  lifecycle {
    ignore_changes = [
      # Autogrow is enabled
      storage_mb,
    ]
  }

  tags = var.tags
}

Debug Output

Changing the SKU, for instance, makes the apply fail (the plan succeeds) with this message:

Error: waiting for update of PostgreSQL Server "redacted" (Resource Group "redacted"): Code="ReplicationInvalidMaxConnections" Message="The operation could not be completed. The requested sku update would cause the master server to have a larger max_connections value than its replica(s)."

The same happens when I try to update the storage size. The error tells me to upgrade the replica first, then the master (the Azure portal shows the same message).

Panic Output

N/A

Expected Behaviour

When Terraform creates these resources, it knows it must create the source server (the master) before the replica. When it updates them, it should likewise know that the replicas must be updated before the master.

Actual Behaviour

Terraform updates the master first. :(

I can't use depends_on: the replica already depends on the master through creation_source_server_id, so forcing the reverse order would create a dependency cycle and break creation.

Steps to Reproduce

  1. Use the small module above
  2. Run terraform apply

Important Factoids

N/A

References

Did not find any related issues.

@NillsF
Copy link
Contributor

NillsF commented Feb 19, 2021

Some additional information about scaling: I did some research, and scaling up and scaling down require different logic.

  • Scaling up: first scale up a replica's compute, then scale up the primary. This order prevents errors from violating the max_connections requirement.
  • Scaling down: first scale down the primary's compute, then scale down the replica. Trying to scale a replica below the primary fails, since that would violate the max_connections requirement.

Source
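
For illustration, a minimal Go sketch of the update ordering described above. The listReplicas and updateSku callbacks are hypothetical stand-ins for the Azure SDK calls a provider would make; this is not the provider's actual implementation (the real fix landed in #10754).

package ordering

import (
	"context"
	"fmt"
)

// Hypothetical stand-ins for the underlying API calls.
type listReplicasFn func(ctx context.Context, primaryID string) []string
type updateSkuFn func(ctx context.Context, serverID, sku string) error

// updateServerSku applies the ordering described above: replicas before the
// primary when scaling up, primary before the replicas when scaling down,
// so the primary's max_connections never exceeds that of any replica.
func updateServerSku(ctx context.Context, primaryID, newSku string, scalingUp bool,
	listReplicas listReplicasFn, updateSku updateSkuFn) error {
	if scalingUp {
		for _, id := range listReplicas(ctx, primaryID) {
			if err := updateSku(ctx, id, newSku); err != nil {
				return fmt.Errorf("scaling up replica %s: %w", id, err)
			}
		}
		return updateSku(ctx, primaryID, newSku)
	}
	if err := updateSku(ctx, primaryID, newSku); err != nil {
		return fmt.Errorf("scaling down primary %s: %w", primaryID, err)
	}
	for _, id := range listReplicas(ctx, primaryID) {
		if err := updateSku(ctx, id, newSku); err != nil {
			return fmt.Errorf("scaling down replica %s: %w", id, err)
		}
	}
	return nil
}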

WodansSon added a commit that referenced this issue Mar 6, 2021
* AccTest for replicaset scaling

* Raw implementation of scalable Postgres replicaset

* Only change SKU for replica

* Extend checks for acctests

* Simplification by removing downscaling replicas from primary

* Little refactor to increase readability

* Fix createReplica test and reuse testcode

* Comment fixes

* Linting

* Update website/docs/r/postgresql_server.html.markdown

Co-authored-by: WS <20408400+WodansSon@users.noreply.github.com>

ghost commented Apr 5, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Apr 5, 2021