
Destroy provisioners not working if resource is using "create_before_destroy" lifecycle #13395

Closed
daniel-ro opened this issue Apr 5, 2017 · 6 comments

daniel-ro commented Apr 5, 2017

Terraform Version

Terraform v0.9.2

Affected Resource(s)

  • aws_instance
  • provisioners (on destroy)

Terraform Configuration Files

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "test" {
  ami = "ami-f4cc1de2" # ubuntu 16.04
  instance_type = "t2.micro"
  lifecycle {
    create_before_destroy = true
  }
  provisioner "remote-exec" {
    when                  = "destroy"
    inline                = [
      "echo 'not running!'",
      "exit 1"
    ]
    connection {
      user                = "ubuntu"
      host                = "${self.private_ip}"
      private_key         = "${file("PATH/TO/KEY_FILE.PEM")}"
    }
  }
}

Debug Output

aws_instance.test: Still creating... (10s elapsed)
aws_instance.test: Still creating... (20s elapsed)
aws_instance.test: Still creating... (30s elapsed)
aws_instance.test: Still creating... (40s elapsed)
aws_instance.test: Creation complete (ID: i-XXXXXX)
aws_instance.test (deposed #0): Destroying... (ID: i-XXXXXX)
aws_instance.test (deposed #0): Still destroying... (ID: i-XXXXXX, 10s elapsed)
aws_instance.test (deposed #0): Still destroying... (ID: i-XXXXXX, 21s elapsed)
aws_instance.test (deposed #0): Still destroying... (ID: i-XXXXXX, 31s elapsed)
aws_instance.test (deposed #0): Still destroying... (ID: i-XXXXXX, 41s elapsed)
aws_instance.test (deposed #0): Destruction complete
Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

Expected Behavior

The destroy provisioner should run before the "deposed" instance is terminated, and if it fails (and on_failure is not set to "continue"), the destruction should be blocked.
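
For reference, a minimal sketch of where on_failure would sit on the destroy provisioner above, using the 0.9-era string values ("fail" is the default):

provisioner "remote-exec" {
  when       = "destroy"
  on_failure = "fail"        # default: a failing destroy provisioner should block destruction
  # on_failure = "continue"  # would let the destroy proceed despite the failure
  inline     = [
    "echo 'not running!'",
    "exit 1"
  ]
}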

Actual Behavior

The deposed resource is destroyed without the destroy provisioner ever running.

Steps to Reproduce

  1. Run terraform apply
  2. Change the resource's ami to force a new resource (see the sketch below)
  3. Run terraform apply again
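
For step 2, the change can be as small as swapping the ami value; the replacement ID below is a placeholder, any other valid AMI will do:

resource "aws_instance" "test" {
  ami           = "ami-XXXXXXXX" # placeholder: any AMI other than "ami-f4cc1de2" forces replacement
  instance_type = "t2.micro"
  # lifecycle and provisioner blocks unchanged
}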

References

apparentlymart (Contributor) commented

Hi @daniel-ro! Sorry this isn't working as expected.

I was able to reproduce this using the null_resource resource, which is always nice because it makes for faster iteration...

resource "null_resource" "foo" {
  triggers {
    val = "1"
  }

  provisioner "local-exec" {
    command = "echo create provisioner"
  }
  provisioner "local-exec" {
    when = "destroy"
    command = "echo destroy provisioner"
  }

  lifecycle {
    create_before_destroy = true
  }
}

$ terraform apply
null_resource.foo: Refreshing state... (ID: 382141014828749646)
null_resource.foo: Creating...
  triggers.%:   "" => "1"
  triggers.val: "" => "2"
null_resource.foo: Provisioning with 'local-exec'...
null_resource.foo (local-exec): Executing: /bin/sh -c "echo create provisioner"
null_resource.foo (local-exec): create provisioner
null_resource.foo: Creation complete (ID: 5007684296399742455)
null_resource.foo (deposed #0): Destroying... (ID: 382141014828749646)
null_resource.foo (deposed #0): Destruction complete

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

I'm going to dive in and try to figure out what's going on here. Thanks for reporting this!

apparentlymart (Contributor) commented

This has an interesting challenge similar to #13097: we don't retain enough information in state to know which provisioners were present when a resource was created.

I'm intending to try to make Terraform run the destroy provisioners that are in config at the time the deposed resource is destroyed, but this will have an edge case:

If a resource already has an instance and I later add a destroy provisioner to it, replacing that resource will cause the destroy provisioner to run against the existing instance even though it might never have had a corresponding create provisioner run on it.

This seems like an acceptable tradeoff for now, but it will add one more gotcha to the existing set of gotchas for destroy-time provisioners.
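
A hedged sketch of that edge case (the AMI IDs are placeholders, not real images): the first configuration below is what was originally applied, with no provisioners at all; the second adds a destroy provisioner and changes the AMI to force a replacement, at which point the destroy provisioner would run against an instance that never saw any create-time provisioning.

# Originally applied: no provisioners, so no create-time setup ever ran on this instance.
resource "aws_instance" "example" {
  ami           = "ami-AAAAAAAA" # placeholder
  instance_type = "t2.micro"
  lifecycle {
    create_before_destroy = true
  }
}

# Later revision: a destroy provisioner is added and the AMI is changed, forcing replacement.
# The destroy provisioner would then run against the existing (old) instance, even though
# that instance never had a corresponding create provisioner run on it.
resource "aws_instance" "example" {
  ami           = "ami-BBBBBBBB" # placeholder
  instance_type = "t2.micro"
  lifecycle {
    create_before_destroy = true
  }
  provisioner "local-exec" {
    when    = "destroy"
    command = "echo 'cleanup that assumes create-time setup happened'"
  }
}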

daniel-ro (Author) commented Apr 7, 2017

@apparentlymart thank you for your fast response!
It looks like provisioners are becoming more and more complicated. I see two options here:

  • Think of provisioners as a lightweight configuration system and keep track of them in some way (hashing may be an easy start, the same as template_file does). This would also benefit "update" provisioners someday.
  • Treat them as a simple hook system within Terraform that runs provisioners at creation or destruction time only if they exist in the configuration file at the time of execution, regardless of when they were added and without any relation to a specific instance of a resource. I would go with this approach, as it's simpler and more in line with Terraform as a product for "Infrastructure as Code" rather than a configuration tool.

apparentlymart (Contributor) commented

Unfortunately I'm going to have to put this one on the back-burner for a little while since its solution is quite a lot more complex than I originally expected, related to my earlier comment.

We understand that this limitation makes destroy provisioners hard to use, so we will hopefully return to this in the near future. I am removing myself as the assignee to indicate that I'm not actively working on this right now and that someone else on the team could pick it up.

apparentlymart removed their assignment Apr 11, 2017
apparentlymart (Contributor) commented

Since we have found a few different cases of this similar issue now, I'm going to consolidate this into #13549 as an umbrella, since I expect the fix for all of these will be similar and tackled at the same time.

ghost commented Apr 14, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 14, 2020