Feature Request: Destructive dependencies #16065

Closed
zopanix opened this issue Sep 11, 2017 · 10 comments

Comments

@zopanix
Contributor

zopanix commented Sep 11, 2017

Hey,

I'm asking for a new feature in Terraform. There is probably a hack to work around this problem (although I haven't found it yet). You'll find the details below, along with a hint at how I would expect it to work.

Scenario

I create my servers with a boot_disk that is created separately. When I update that boot disk, Terraform fails, because the resource being updated is the disk, not the server, and I cannot detach and reattach the boot disk of an existing server (I'm using this on GCP) while it is running. What I would need is for Terraform to know that when my disk resource is recreated, it should also recreate the server that uses it.
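
For context, a minimal sketch of the kind of configuration being described, using the Google provider's google_compute_disk and google_compute_instance resources; the names, zone, and image are illustrative, not taken from the original report:

resource "google_compute_disk" "boot" {
  name  = "example-boot-disk"
  image = "debian-cloud/debian-9"
  zone  = "europe-west1-b"
}

resource "google_compute_instance" "server" {
  name         = "example-server"
  machine_type = "n1-standard-1"
  zone         = "europe-west1-b"

  boot_disk {
    # The separately managed disk is attached here. Replacing the disk
    # does not, by itself, force replacement of this instance.
    source = "${google_compute_disk.boot.self_link}"
  }

  network_interface {
    network = "default"
  }
}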

Proposed solution

I propose being able to add a taint_dependencies argument on any resource (I'll illustrate with an example below). This would tell Terraform that if the resource carrying taint_dependencies gets deleted or recreated, the resources listed in its taint_dependencies should be as well.

Example

resource "a" {
  taint_dependencies: ["ref_to_resource_b"]
  foo: "bar"
}

resource "b" {
  something: "stuff"
  other: "${ref_to_resource_a}"
}

In the example above, on creation, resource a is created first and then resource b. But if attribute foo is later changed to the value baz, by default Terraform will do nothing with resource b. In my case, however, I want resource b to get recreated as well.

Additionally, in some cases like mine I would expect Terraform to be smart enough to know about this on its own, because it seems obvious that you cannot swap the boot disk of a running VM. So in some cases it should be standard behavior.

Additional information

Related bugs: hashicorp/terraform-provider-google#78
I know there is not a lot of activity on that bug; I was able to work around it until now, but not anymore.

@zopanix
Contributor Author

zopanix commented Sep 11, 2017

BTW: I looked through the open issues for something similar but did not find anything. I'm not a native English speaker, so my apologies for any language problems.

@apparentlymart
Contributor

Hi @zopanix! Thanks for this suggestion.

This is indeed a problem with some combinations of resources where Terraform doesn't currently have enough information about the underlying system to know what it needs to do.

Another example we know of is hashicorp/terraform-provider-aws#1315, where a load balancer target group can't be replaced without also replacing its listeners.

I agree that Terraform should ideally just know that these interactions exist as part of the resource definitions. That's not possible with the current way that resources are modeled, but I'd like to see if we could design a way to express that in future.

In the meantime, a manual way of expressing it like you propose here could be a good workaround. I think the most crucial part of this behavior is making sure that the dependency graph gets built correctly to ensure the following sequence of actions:

  • destroy b
  • destroy a
  • re-create a
  • re-create b

This also needs to behave a bit differently when the create_before_destroy mode is enabled:

  • depose b
  • depose a
  • create new a
  • create new b
  • destroy deposed b
  • destroy deposed a
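
For readers unfamiliar with that mode, create_before_destroy is an existing lifecycle setting; a minimal sketch using the thread's placeholder resource names (nothing here is specific to the proposal):

resource "a" {
  foo = "bar"

  lifecycle {
    # With this set, the replacement object is created before the old one
    # is destroyed, which is why the ordering above becomes depose/create/destroy.
    create_before_destroy = true
  }
}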

This seems theoretically possible to achieve, but previous experience with this sort of change makes me want to be cautious and prototype this a bit first to make sure the behavior makes sense for the various examples of this problem. In particular, this would be the first time we've had something that can potentially create long chains of interleaved create/destroy actions and so we'd need to make sure it works well with transitive dependencies.


Another similar way we could approach this is to generalize it to allow arbitrary "forces new resource" values on any resource. Something like this:

resource "aws_alb_target_group" "service" {
  name     = "tf-my-service"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "vpc-c0ffeffe"
}

resource "aws_alb_listener" "service" {
  load_balancer_arn = "${aws_alb.service.arn}"
  port              = "80"

  default_action {
    target_group_arn = "${aws_alb_target_group.service.arn}"
    type             = "forward"
  }

  lifecycle {
    replace_on_change = {
      target_group_id = "${aws_alb_target_group.service.id}"
    }
  }
}

In this case, Terraform would plan to replace aws_alb_listener.service whenever aws_alb_target_group.service.id is changed, which it would be if the target group is being replaced. (It would be changing to <computed>.)

This was something that was previously discussed in the comments of #8099. It requires the same care to ensure that the creates and destroys happen in the correct order, but it gives more flexibility to trigger replacement via any changing value in Terraform, rather than just the replacement of resources.

Thanks again for the suggestion!

@zopanix
Contributor Author

zopanix commented Sep 11, 2017

Hey @apparentlymart, I was warned that this would be a tricky one to push, and I'm aware that there are different approaches to this problem. I was thinking maybe we could benefit from the existing tainting mechanism, which should already handle some of these cases, except for chained tainting, which does not exist as a single command. I see this more as an automatic tainting mechanism, and if I'm not mistaken the chained destruction should be orchestrated as a LIFO, and that should be good. I wouldn't add the intelligence to the provider plugin to automatically add chained destruction just yet; I would maybe start off with only explicit chained deletion. That being said, I understand you want to think this through before shipping anything.

@zopanix
Contributor Author

zopanix commented Sep 11, 2017

Btw, if you know of any workarounds for this I'll gladly take them. I'm currently thinking of having my CI parse the plan and "manually" taint the resources that need it. Not an ideal solution, but in my case I think it is the best one for now.

@jakauppila

Not sure if this quite belongs here, but I have a use-case that is not too dissimilar, except it requires a modification rather than a re-creation.

resource "a" {
  foo: "bar"
}

resource "b" {
  something: "stuff"
  other: "${ref_to_resource_a}"
}

Going with this example again: on creation, resource "a" is created first and then resource "b". I need to destroy and re-create resource "a" with new values, but when performing the delete, I need to discover the relationship to resource "b" and perform an explicit removal on that resource before being able to delete resource "a".

@bflad bflad added the lifecycle label Dec 1, 2021
@pspot2

pspot2 commented Dec 23, 2021

#8617

@pspot2

pspot2 commented Dec 27, 2021

#13593

@pspot2

pspot2 commented Dec 27, 2021

I've counted roughly 150 likes across the 11 issues linked here (including this issue's own likes). What would be the threshold for giving this issue a higher priority?

@jbardin
Member

jbardin commented Jun 7, 2022

Sorry, this slipped past when closing out the issues related to the new replace_triggered_by feature.

Thanks!
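
For reference, the feature jbardin mentions shipped as the replace_triggered_by lifecycle argument in Terraform v1.2. A minimal sketch of how the listener/target group case from the earlier comment could be expressed with it, reusing that comment's resource names and modern (0.12+) reference syntax:

resource "aws_alb_listener" "service" {
  load_balancer_arn = aws_alb.service.arn
  port              = 80

  default_action {
    target_group_arn = aws_alb_target_group.service.arn
    type             = "forward"
  }

  lifecycle {
    # Whenever the target group is replaced (its id changes), Terraform
    # plans to replace this listener as well.
    replace_triggered_by = [aws_alb_target_group.service.id]
  }
}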

@jbardin jbardin closed this as completed Jun 7, 2022
Atry added a commit to Atry/terraform-aws-ecs-alb that referenced this issue Jul 6, 2022
According to hashicorp/terraform#16065 (comment), the target must be replaced when a listener is replaced
@github-actions

github-actions bot commented Jul 8, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 8, 2022