
Ability to mark resources as "un-deleteable" #564

Closed
keyneston opened this issue Nov 13, 2014 · 15 comments

Comments

@keyneston

I'd like to use Terraform to manage some things, and the destroy option makes it quick to tear down development environments. The issue I have is that if I link something like, say, an S3 bucket for reference, then destroy will delete it.

It would be convenient to be able to mark resources as un-deletable. This would also be useful for modules: define all of your security groups in a central module and mark them un-deletable; then individual projects can reference those security groups.

@zadunn

zadunn commented Dec 17, 2014

This would be helpful for Route 53 resources that are shared between different VPCs. Right now we work around this by hard-linking a shared.tf into each VPC we build. However, this means we can't call a destroy, since it would destroy the shared resources.

@keyneston
Author

So I can probably work around this by creating a module that is shared between repos.

It would include hard-coded variables with information such as the Route 53, VPC, and security group IDs that other nodes rely upon.

Is that my best option for now?
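
For example, the shared module might look something like this (the module layout, names, and IDs below are illustrative placeholders, not anything from a real setup):

```hcl
# shared/outputs.tf — a module that hard-codes the IDs other
# projects depend on, so those projects never manage (or destroy)
# the underlying resources themselves.
# All values below are illustrative placeholders.

output "route53_zone_id" {
  value = "Z1EXAMPLE"
}

output "vpc_id" {
  value = "vpc-12345678"
}

output "web_sg_id" {
  value = "sg-abcdef01"
}
```

Each consuming project would then reference the module and use these outputs instead of declaring the resources directly, so a destroy in that project never touches the shared infrastructure.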

@mitchellh
Contributor

Yes, for now. This isn't a very difficult feature to add, and we're going to be doing a big TF feature push very shortly here...

@carimura

hey @mitchellh is this one coming soon as part of the "big TF feature push" or should we find a workaround?

@wazoo

wazoo commented Mar 20, 2015

This is a great idea; even the ability to disable the destroy command globally would be great. I like to use Terraform to deploy environments for customers (who may or may not use Terraform in the end, despite my prodding), and the ability to put Terraform into an "apply changes but don't destroy" mode would be excellent.

@eukaryote

Another emphatic vote for the ability to prevent deletions with absolute certainty from me.

I recently had a surprise deletion of an RDS database when the plan being used had reported that only an in-place update would be performed. If people use Terraform for things like production databases, then there really needs to be a way to guarantee that a deletion will never occur given the correct configuration.

@mitchellh
Contributor

@eukaryote If the plan said it would do in-place, and you did a terraform apply <plan>, and it did a destroy, this would be a critically terrible bug that needs to be fixed RIGHT NOW in Terraform. We have a lot of test coverage around plan verification and we haven't seen anything slip through yet. If you can reproduce that, please report the bug ASAP for us.

(However, please note that if you ran terraform apply after a terraform plan and didn't specify the plan file specifically, then the state of the world may have changed so it may have indeed done something different).

@eukaryote

@mitchellh Upon trying to investigate and dig further, I see that I was misinterpreting the plan output. I saw plan output of:

```
~ module.main
    2 resource(s)
~ module.db
    1 resource(s)
```

And I thought the one resource in module.db would be modified in-place, but the "~" and yellowish text color don't mean that for modules. When I add "-module-depth=1" to the plan command, it shows that the actual change in the db module is to delete and then recreate the resource. If I'd included a "-module-depth" param when planning, I would have seen that the resource would be deleted, known not to apply the plan, and avoided my disaster.

What would you think about a request to default "-module-depth" to "-1" rather than "0", so that the default is to see all the changes that will be applied?

I'm still heavily in favor of easily making it possible to never delete though, even if plan output is misinterpreted like I did or there are any other unexpected issues.

@wazoo

wazoo commented Mar 26, 2015

@eukaryote yeah that is what I am afraid of.

I think this is more of a concern in the paradigm where TF is being used to "maintain" an environment (i.e. all changes are done with TF but the environment never dies) vs the tear up/down paradigm that I imagine TF was designed for.

The ability to mark any object as delete=never or similar, so that Terraform would throw an error and block the apply if it was going to delete that object (or would just skip it), would be awesome. This would also help when adjustments are made outside of TF that you don't necessarily want to overwrite (maybe a create_only=true tag?).

I don't know if you ever intend to create 'meta-parameters' (params that apply to every object), but that is where I would imagine something like this going, so that it is provider-agnostic.

I was also thinking this could be great if you wanted to destroy parts of the environment but not others, e.g. delete everything except the VPC and subnets.

@mitchellh
Contributor

@wazoo

Great! We'll plan to implement this.

We already designed for these metaparameters. :) These would live in the lifecycle block that exists on every resource already.

And a correction: Terraform was very much designed for ongoing maintenance. But, you have to get a really strong convergence story first before you can tackle lifecycle. With 0.4 we're very comfortably at that stage and we're starting to really go after lifecycle.

This "never delete" feature won't make it for 0.4 but 100% certainly for 0.5.

@eukaryote

@mitchellh

What about the idea of making module-depth on plan default to -1 rather than 0? It's arguably a friendlier and safer default, since nobody could then make my mistake of interpreting the yellow "~" output next to a module as an in-place update.

@mitchellh
Contributor

@eukaryote Let's move to #1330. I'll CC you there to sub you.

@phinze
Contributor

phinze commented Apr 27, 2015

This is done as prevent_destroy (#1566); it's in master and slated for the next TF release 👍
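
For anyone finding this thread later, the shipped syntax looks like this (the resource type and names below are just an illustration):

```hcl
# Any plan that would destroy this resource now fails with an
# error instead of being applied.
resource "aws_s3_bucket" "shared" {
  bucket = "example-shared-bucket"

  lifecycle {
    prevent_destroy = true
  }
}
```

With this set, both a targeted destroy and a full `terraform destroy` abort with an error when they reach the protected resource, which is exactly the "guarantee a deletion will never occur" behavior requested above.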

@phinze phinze closed this as completed Apr 27, 2015
@tomelliff
Contributor

tomelliff commented Aug 3, 2017

I'm currently working through some annoying issues with destroying postgres provider resources in RDS, and was thinking it would be great if I could just add a lifecycle block with ignore_destroy = true to have Terraform skip trying to destroy those resources, because the RDS instance itself is going to be torn out from under them.

I'm about to raise a separate issue in the postgres provider to see if there's a good way of solving it specifically for this use case but would there be any interest in an ignore_destroy lifecycle on resources?

I did have an idea that I might be able to use a destroy provisioner to remove the state entry. While the debug output seemed to show the entry being properly removed, Terraform then carried on with the plan to remove the resource (which failed as before), and it looks like the edited state didn't get pushed, or was overwritten after the failure, so a second terraform destroy wouldn't converge to ignoring it on the next run.
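
The proposal would look something like this (hypothetical syntax — ignore_destroy is not an existing Terraform lifecycle argument, and the resource shown is only an example):

```hcl
resource "postgresql_database" "app" {
  name = "app"

  lifecycle {
    # Hypothetical flag: skip the destroy step for this resource
    # entirely, since the parent RDS instance is being torn down
    # anyway and the database will disappear with it.
    ignore_destroy = true
  }
}
```

Unlike prevent_destroy, which aborts the run with an error, this would simply drop the resource from the destroy plan while still removing it from state.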

@ghost

ghost commented Apr 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 8, 2020