
feature request to extend/change lifecycle.prevent_destroy #2159

Open
ketzacoatl opened this issue May 31, 2015 · 25 comments

@ketzacoatl
Contributor

lifecycle.prevent_destroy is totally awesome, if only because it prevents you from shooting yourself in the foot (or worse..)

But at present, if you enable this flag, and Terraform decides it wants to destroy your instance, you are stuck:

* aws_instance.foobar: plan would destroy, but resource has prevent_destroy set. To avoid this error, either disable prevent_destroy, or change your config so the plan does not destroy this resource.

In other words, there are two logical ways to come at using this flag:

  • prevent an operator or Terraform from destroying something by mistake
  • or tell Terraform to ignore the fact that it wants to delete/recreate a resource (TF wants to do so for some reason, you have the flag set, so TF can't delete).

Terraform covers the first very nicely, but elects to error out completely. If erroring out were optional, or TF otherwise did not fail hard, it could cover the second as well. I have also worked around this by going into the AWS console and enabling termination protection on the instances in the cluster... this effectively lets TF think it can continue, while AWS tells it no. I do not like doing that, and must be extremely cautious when I do, for fear of a mistake.

Is it a bad idea to tell TF to ignore the reason why it wants to destroy an instance of a resource?

My common use case is user_data... and this often makes life rather annoying and cumbersome. For example, I will create a cluster of hosts using user_data and go on to do other things, but in between, some small request to open a port comes through... user_data changed, and TF wants to destroy my cluster. All I want to do is open a port, but now I'm left fighting TF to keep it from rm'ing my cluster. When in this situation, I want to tell TF to ignore everything but my one change, and I generally have to find innovative ways to trick TF into leaving one resource or another alone.
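To make the scenario concrete, a minimal sketch (resource names and values are illustrative, not from the original report). Any edit to user_data forces a new instance, so prevent_destroy turns a one-line change into a hard error:

resource "aws_instance" "foobar" {
  ami           = "ami-11111111"
  instance_type = "t2.micro"

  # Any change to this forces Terraform to destroy and recreate the instance.
  user_data = file("cloud-config.yml")

  lifecycle {
    # With a replacement pending, the plan now errors out instead.
    prevent_destroy = true
  }
}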

@phinze
Contributor

phinze commented Jun 1, 2015

Hi @ketzacoatl - thanks for the well explained use case.

Here's what I believe is the simplest way to accomplish what you're looking for:
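Presumably something along the lines of the ignore_changes lifecycle attribute tracked in #2018; this is a sketch under that assumption, not the verbatim snippet:

resource "aws_instance" "foobar" {
  # ...

  lifecycle {
    # Tell Terraform not to treat edits to user_data as a diff,
    # so it never plans the destroy in the first place.
    ignore_changes = ["user_data"]
  }
}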

What do you think?

@ketzacoatl
Contributor Author

@phinze, I am certainly willing to explore how that plays out. It sounds like it would work to cover all I can throw at it right now.

@JeanMertz
Contributor

@phinze I am very interested in this use-case as well.

I'd like to provision the database along with the other infrastructure. But that database is the only piece of the infrastructure that I want to keep around, even after Terraform destroy actions.

However, I would also like Terraform to still use its variables when re-creating the infrastructure, so it can reconnect to that same database again.
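With today's flag, the closest approximation is to guard the database alone; but as this thread notes, terraform destroy then fails outright instead of skipping it. A minimal sketch, attributes illustrative:

resource "aws_db_instance" "main" {
  engine         = "postgres"
  instance_class = "db.t2.micro"

  lifecycle {
    # Protects the database, but currently aborts the whole destroy
    # run rather than quietly leaving this one resource in place.
    prevent_destroy = true
  }
}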

@ketzacoatl
Contributor Author

@JeanMertz, maybe the terraform taint command would help in the meantime, albeit a tiny bit cumbersome.
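That would mean forcing recreation of everything except the database by hand; roughly (resource addresses are illustrative):

terraform taint aws_instance.web
terraform taint aws_security_group.web
# ...one taint per resource to recreate, leaving the DB alone...
terraform apply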

@JeanMertz
Contributor

@ketzacoatl you mean tainting all but the database? Yes, that would be cumbersome given the size of the infrastructure setup.

Also, I am not sure what happens if I taint a resource on which the DB depends; wouldn't the DB probably have to be destroyed as well?

I'm not even sure how that would work with the proposal in this issue. If the aws_db_instance.db_subnet_group_name dependencies are re-created, the DB might have to be recreated as well, so even preventing it from being destroyed does not help in that case.

@ketzacoatl
Contributor Author

@JeanMertz

Yes, if you taint a resource the DB has a dependency on, TF will want to rm your DB unless you take some action to prevent that. My proposal here would allow me to tell TF not to touch my DB, but also not to error out; it would just keep on processing the request to recreate the dependency.
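In other words, something like this, where the flag is purely hypothetical (it does not exist in Terraform) and only illustrates the proposed skip-instead-of-error behavior:

resource "aws_db_instance" "main" {
  # ...

  lifecycle {
    # HYPOTHETICAL flag: skip any planned destroy of this resource
    # and keep processing the rest of the run, instead of aborting.
    ignore_destroy = true
  }
}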

@JeanMertz
Contributor

@ketzacoatl yes, but then what? Usually TF makes that decision because that property of the DB instance has to be changed to keep the DB working with the latest infrastructure changes.

So if you change your subnet IDs, and TF has to recreate your DB for it to work with the new subnets, but you prevent it from doing so and don't want it to show any errors, then you basically end up with a running, but disconnected, database and no errors telling you so?

@nathanielks
Contributor

I'd like to second the proposed second behavior in the initial description, and I'm with @JeanMertz. My ideal use case would be a database that persists across destroy/create cycles, as I don't want that data destroyed.

@nathanielks
Contributor

After some thinking, I posted a thought in #1139 (comment). I'm going to play around with separating the configs and see if that's a viable solution in the interim.

@nathanielks
Contributor

Also, can someone explain this portion of the error message?

...or change your config so the plan does not destroy this resource.

How could we change it so that it's not factored into the diff?

@phinze
Contributor

phinze commented Jun 24, 2015

@nathanielks that's meant to refer to scenarios where a user has changed a parameter that's forcing the resource to be replaced. It's probably confusing to users invoking terraform destroy. If you file a separate issue we can get the wording fixed up to be clearer 👍
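For instance (a sketch; the values are illustrative), editing an attribute that cannot be updated in place, such as an instance's AMI, turns the plan into a destroy-and-recreate even though no one ran terraform destroy:

resource "aws_instance" "foobar" {
  # Changing the AMI cannot be done in place, so the plan
  # becomes "-/+" (destroy, then create a replacement)...
  ami           = "ami-22222222" # was "ami-11111111"
  instance_type = "t2.micro"

  lifecycle {
    # ...which prevent_destroy then rejects with the error above.
    prevent_destroy = true
  }
}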

@phinze
Contributor

phinze commented Jun 24, 2015

Also as a general update - I'm still feeling like #2018 + #2253 should take care of the use cases described in this thread. If there's consensus around that, we can probably close this thread and folks can track those two features for progress.

@nathanielks
Contributor

@phinze will do, and done!

@nathanielks
Contributor

@phinze re: other tickets. That sounds about right to me as well!

@ketzacoatl
Contributor Author

@JeanMertz, sorry for the delayed reply.

@ketzacoatl yes, but then what? Usually TF makes that decision because that property of the DB instance has to be changed to keep the DB working with the latest infrastructure changes.

But sometimes TF is just flat-out wrong, and I spend precious minutes (or hours!) working to trick Terraform into leaving some particular resource alone when I run terraform apply. This is a fact of using Terraform in production, in the real world, where things are not always ideal. It would help tremendously if there were a workflow available for these types of non-ideal situations.

@JeanMertz
Contributor

@ketzacoatl thanks. That makes sense. In fact, I too have been seeing situations where Terraform was incorrectly assuming changes, so I can relate to your problems!

@joekhoobyar
Contributor

This pull request is a good start as well; it adds an ignore_updates flag to the lifecycle block.

#2525

@rgabo

rgabo commented Jul 5, 2015

The closer the environment is to production, the harder it is to work around this issue. Ideally everything should be recreatable... MongoDB replica sets instance by instance... but the reality in our case is that cloud-config userdata will constantly change as we improve our environments, and those changes won't easily be rolled out to staging/production environments. So +1 for #2018, #2253.

Using Atlas and the otherwise great workflow of GitHub pull requests and plan/apply in Atlas makes this even more painful.

@ketzacoatl
Contributor Author

To give you a sense of how horrible it can be in production: I have (manually) enabled termination protection on key instances, I allow Terraform to think it'll recreate my SG/instances when applying updates, and I allow Terraform to fail in doing so... I don't want to waste any more time futzing around here, so while it's messy, it has been the least painful way to deal with this while waiting on the issue.
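For reference, the same termination-protection workaround can be expressed in Terraform itself through the aws_instance disable_api_termination argument (a sketch, values illustrative). Terraform still plans the destroy, but the EC2 API refuses it:

resource "aws_instance" "foobar" {
  ami           = "ami-11111111"
  instance_type = "t2.micro"

  # AWS-level termination protection: the EC2 API rejects the
  # terminate call, so only this resource fails the apply.
  disable_api_termination = true
}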

@rgabo

rgabo commented Jul 6, 2015

@ketzacoatl that's what we're doing for the time being, hoping that everything else is applied by Terraform. It does not look nice on Atlas (all runs fail), but it works. Looking forward to a solution, let me know if we can help.

@donnoman

donnoman commented Aug 3, 2016

I'm a new user of Terraform and I just want prevent_destroy on the object I don't want destroyed. If it's a global "don't destroy anything in the entire plan" flag, just make it a global option; what difference does it make if it's on a specific object?

The fact that it's on a specific object suggests that it only affects that object, which is a surprise to a new user when the whole plan can't be destroyed.

If you need a DSL change to add the behavior we are asking for, perhaps a

lifecycle {
  ignore_destroy = true
}

would work. But like I said, it makes more sense to have a global prevent_destroy for your block-the-plan behavior, while prevent_destroy on a resource only prevents that specific resource from being destroyed.

@robolmos

robolmos commented Mar 5, 2017

I'd like to +1 this. I'm also interested in easily tearing down a QA environment except for the database instance.

It would also be nice if there could be pre_terminate actions (similar to an OOP destructor method), like making a snapshot of a disk and storing a reference to that snapshot in the state file for future use.
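For the database case specifically, something close to this already exists: aws_db_instance can take a final snapshot when it is destroyed (a sketch, identifiers illustrative):

resource "aws_db_instance" "main" {
  engine         = "postgres"
  instance_class = "db.t2.micro"

  # Take a named snapshot on destroy instead of skipping it,
  # so the data survives the teardown.
  skip_final_snapshot       = false
  final_snapshot_identifier = "qa-db-final"
}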

@clundquist-stripe

clundquist-stripe commented Mar 28, 2023

I'm surprised variables aren't allowed here, as that seems like it would help:

resource "aws_subnet" "subnet" {
  count = length(var.azs)

  lifecycle {
    # only let people who really want to destroy a subnet do so by passing -var really_destroy_my_subnets=true
    prevent_destroy = !var.really_destroy_my_subnets
  }
  # ...
}
terraform plan -var really_destroy_my_subnets=true
Error: Variables not allowed

  on ../../../modules/vpc_subnet/main.tf line 10, in resource "aws_subnet" "subnet":
  10:     prevent_destroy = !var.really_destroy_my_subnets

Variables may not be used here.

@zachsis

zachsis commented Aug 31, 2023

2023, and still no movement on implementing a feature like this? Or is this tracked somewhere else?

@markmcdon7

Hello, is there a solution for this? It seems people have been waiting 10 years.
