
[CLOSED] recreating a node doesn't recreate the pool attachment that depends on it #113

RavinderReddyF5 opened this issue Aug 14, 2020 · 4 comments

Comments

@RavinderReddyF5
Owner

Issue by DavidGamba
Wednesday Apr 17, 2019 at 16:46 GMT
Originally opened as https://github.com/terraform-providers/terraform-provider-bigip/issues/82


Starting from a basic pool, pool attachment, and node:

resource "bigip_ltm_pool" "dgamba-test-pool" {
  name = "/Common/dgamba-test-pool"

  load_balancing_mode = "least-connections-member"

  monitors   = [""]
  allow_snat = "yes"
  allow_nat  = "yes"
}

resource "bigip_ltm_pool_attachment" "dgamba-test-pool--dgamba-test-node-1" {
  pool = "${bigip_ltm_pool.dgamba-test-pool.name}"
  node = "${bigip_ltm_node.dgamba-test-node-1.name}:8080"
}

resource "bigip_ltm_node" "dgamba-test-node-1" {
  name = "/Common/dgamba-test-node-1"

  address = "10.169.20.244"

  connection_limit = "0"
  dynamic_ratio    = "1"
  monitor          = "default"
  rate_limit       = "disabled"
  state            = "user-up"  # "user-down"

  fqdn {
    address_family = "ipv4"
  }
}

Then change the IP of the node, or change from an IP to a DNS entry:

Terraform will perform the following actions:

-/+ bigip_ltm_node.dgamba-test-node-1 (new resource required)
      id:                    "/Common/dgamba-test-node-1" => <computed> (forces new resource)
      address:               "10.169.20.244" => "dgamba-test-node-1.example.com" (forces new resource)
      connection_limit:      "0" => "0"
      dynamic_ratio:         "1" => "1"
      fqdn.#:                "1" => "1"
      fqdn.0.address_family: "ipv4" => "ipv4"
      monitor:               "default" => "default"
      name:                  "/Common/dgamba-test-node-1" => "/Common/dgamba-test-node-1"
      rate_limit:            "disabled" => "disabled"
      state:                 "user-up" => "user-up"


Plan: 1 to add, 0 to change, 1 to destroy.

Only the node is changing; the pool attachment is not affected in the plan. Then apply:

bigip_ltm_node.dgamba-test-node-1: Destroying... (ID: /Common/dgamba-test-node-1)

Error: Error applying plan:

1 error(s) occurred:

* bigip_ltm_node.dgamba-test-node-1 (destroy): 1 error(s) occurred:

* bigip_ltm_node.dgamba-test-node-1: 01070110:3: Node address '/Common/dgamba-test-node-1' is referenced by a member of pool '/Common/dgamba-test-pool'.

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Adding depends_on to the pool attachment didn't work.
Changes to the node that require a destroy/create should taint the pool attachment.
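
In the meantime, tainting the attachment by hand before the apply should force it to be destroyed and recreated together with the node. A sketch (untested), using the resource address from the config above:

terraform taint bigip_ltm_pool_attachment.dgamba-test-pool--dgamba-test-node-1
terraform apply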

@RavinderReddyF5
Owner Author

Comment by dannyk81
Wednesday Apr 17, 2019 at 17:03 GMT


@DavidGamba I see you mentioned that you tried adding depends_on to bigip_ltm_pool_attachment, but you didn't share what you tried.

Since this is an intermediary resource (it represents a relationship between two other resources), you should define the dependency on both related resources using the depends_on attribute.

Can you try the following:

resource "bigip_ltm_pool_attachment" "dgamba-test-pool--dgamba-test-node-1" {
  pool = "${bigip_ltm_pool.dgamba-test-pool.name}"
  node = "${bigip_ltm_node.dgamba-test-node-1.name}:8080"

  depends_on = ["bigip_ltm_node.dgamba-test-node-1", "bigip_ltm_pool.dgamba-test-pool"]
}

@RavinderReddyF5
Owner Author

Comment by DavidGamba
Wednesday Apr 17, 2019 at 17:48 GMT


@dannyk81 Thanks for the quick reply. IMHO the dependency should be implicit.
Even when it is explicit, though, I get exactly the same result: only the node is marked for rebuild in the plan, and the apply fails:

--- main.tf
+++ main.tf
@@ -229,12 +229,14 @@ resource "bigip_ltm_pool" "dgamba-test-pool" {
 resource "bigip_ltm_pool_attachment" "dgamba-test-pool--dgamba-test-node-1" {
   pool = "${bigip_ltm_pool.dgamba-test-pool.name}"
   node = "${bigip_ltm_node.dgamba-test-node-1.name}:8080"
+
+  depends_on = ["bigip_ltm_node.dgamba-test-node-1", "bigip_ltm_pool.dgamba-test-pool"]
 }
 
 resource "bigip_ltm_node" "dgamba-test-node-1" {
   name = "/Common/dgamba-test-node-1"
 
-  address = "10.169.20.244"
+  address = "valid-dns.example.com"
 
   connection_limit = "0"
   dynamic_ratio    = "1"

I had only tried with depends_on = ["bigip_ltm_node.dgamba-test-node-1"] before.

@RavinderReddyF5
Owner Author

Comment by dannyk81
Wednesday Apr 17, 2019 at 18:52 GMT


Hey @DavidGamba, thanks for the additional details.

Dependency management in Terraform is rather counter-intuitive; indeed, the above would not help (we use a slightly different operational flow, so I hadn't hit this before).

The dependency tree (whether derived implicitly or defined explicitly) only determines the order of operations; it doesn't mark the dependent resource for recreation. There's a good explanation in hashicorp/terraform#8099 (comment) and a related issue in hashicorp/terraform#16200.

One way to work around this is to change the node's name attribute whenever you apply a change that causes the node to be recreated (not pretty, but it works). Since the pool attachment resource references bigip_ltm_node.test_node1.name, changing that attribute forces a recreate of the pool_attachment resource as well.

Here's my sample code:

resource "bigip_ltm_node" "test_node1" {
  name = "/Common/test_node1"
  address = "11.11.11.11"
}

resource "bigip_ltm_pool" "test_pool1" {
  name = "/Common/test_pool1"
  load_balancing_mode = "round-robin"
  allow_snat = "yes"
  allow_nat = "yes"
}

resource "bigip_ltm_pool_attachment" "test_attach1" {
  pool = "${bigip_ltm_pool.test_pool1.name}"
  node = "${bigip_ltm_node.test_node1.name}:80"
}

I applied the above and now would like to change the address of test_node1, so I'm doing this:

diff --git a/bigip.tf b/bigip.tf
index 3358dae..8f7a7f7 100644
--- a/bigip.tf
+++ b/bigip.tf
@@ -1,6 +1,6 @@
 resource "bigip_ltm_node" "test_node1" {
-  name = "/Common/test_node1"
-  address = "11.11.11.11"
+  name = "/Common/test_node1a"
+  address = "11.11.11.12"
 }
 
 resource "bigip_ltm_pool" "test_pool1" {

and now the plan shows:

bigip_ltm_node.test_node1: Refreshing state... (ID: /Common/test_node1)
bigip_ltm_pool.test_pool1: Refreshing state... (ID: /Common/test_pool1)
bigip_ltm_pool_attachment.test_attach1: Refreshing state... (ID: /Common/test_pool1-/Common/test_node1:80)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

-/+ bigip_ltm_node.test_node1 (new resource required)
      id:               "/Common/test_node1" => <computed> (forces new resource)
      address:          "11.11.11.11" => "11.11.11.12" (forces new resource)
      connection_limit: "0" => "0"
      dynamic_ratio:    "1" => "0"
      monitor:          "default" => ""
      name:             "/Common/test_node1" => "/Common/test_node1a" (forces new resource)
      rate_limit:       "disabled" => ""
      state:            "user-up" => "user-up"

  ~ bigip_ltm_pool.test_pool1
      monitors.#:       "1" => "0"
      monitors.0:       "" => ""

-/+ bigip_ltm_pool_attachment.test_attach1 (new resource required)
      id:               "/Common/test_pool1-/Common/test_node1:80" => <computed> (forces new resource)
      node:             "/Common/test_node1:80" => "/Common/test_node1a:80" (forces new resource)
      pool:             "/Common/test_pool1" => "/Common/test_pool1"


Plan: 2 to add, 1 to change, 2 to destroy.

------------------------------------------------------------------------

and apply works as well:

bigip_ltm_pool_attachment.test_attach1: Destroying... (ID: /Common/test_pool1-/Common/test_node1:80)
bigip_ltm_pool.test_pool1: Modifying... (ID: /Common/test_pool1)
  monitors.#: "1" => "0"
  monitors.0: "" => ""
bigip_ltm_pool_attachment.test_attach1: Destruction complete after 1s
bigip_ltm_node.test_node1: Destroying... (ID: /Common/test_node1)
bigip_ltm_node.test_node1: Destruction complete after 0s
bigip_ltm_node.test_node1: Creating...
  address:          "" => "11.11.11.12"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "0"
  name:             "" => "/Common/test_node1a"
  state:            "" => "user-up"
bigip_ltm_pool.test_pool1: Modifications complete after 1s (ID: /Common/test_pool1)
bigip_ltm_node.test_node1: Creation complete after 0s (ID: /Common/test_node1a)
bigip_ltm_pool_attachment.test_attach1: Creating...
  node: "" => "/Common/test_node1a:80"
  pool: "" => "/Common/test_pool1"
bigip_ltm_pool_attachment.test_attach1: Creation complete after 1s (ID: /Common/test_pool1-/Common/test_node1a:80)

Apply complete! Resources: 2 added, 1 changed, 2 destroyed.

As you can see, the name change forces a new resource, which is what we want. Hope this helps.

@RavinderReddyF5
Owner Author

Comment by DavidGamba
Wednesday Apr 17, 2019 at 19:22 GMT


Thanks @dannyk81 for a very detailed explanation of the root cause of the issue. I will use the workaround you provided. Hopefully this issue serves as documentation for other users running into this problem.
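
A note for anyone reading this issue on a much newer Terraform (1.2 or later): the replace_triggered_by lifecycle argument was added to cover exactly this kind of case, where replacing one resource should also force a dependent resource to be replaced. A minimal, untested sketch based on the configuration from the original report (verify the provider's current argument names before relying on it):

resource "bigip_ltm_pool_attachment" "dgamba-test-pool--dgamba-test-node-1" {
  pool = bigip_ltm_pool.dgamba-test-pool.name
  node = "${bigip_ltm_node.dgamba-test-node-1.name}:8080"

  lifecycle {
    # Replace the attachment whenever the node's address changes,
    # i.e. whenever the node itself is forced to be replaced.
    replace_triggered_by = [bigip_ltm_node.dgamba-test-node-1.address]
  }
}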
