
Feature request: Protection against accidental Cluster deletion #79

Open
patrickherrera opened this issue Aug 3, 2022 · 11 comments
Labels: enhancement (New feature or request), question (Further information is requested)

Comments

@patrickherrera

Any chance you could add a flag to enable "delete_protection"? As long as that is true, the Cluster cannot be deleted. I feel it is too easy to make a mistake and have Terraform blow away your Cluster - this would at least make deletion a two-step process and unlikely to happen accidentally.
I'm thinking of a pattern similar to the one used by AWS RDS databases (taken from the AWS TF Provider docs):

deletion_protection - (Optional) If the DB instance should have deletion protection enabled. The database can't be deleted when this value is set to true. The default is false.
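
To make it concrete, here is a rough sketch of what that could look like on a cluster resource (deletion_protection is the proposed flag, not something the provider offers today; the other arguments are just an illustrative cluster definition):

resource "confluent_kafka_cluster" "main" {
  display_name = "orders-cluster"
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "us-east-2"
  basic {}

  environment {
    id = confluent_environment.staging.id
  }

  # Proposed flag: while this is true, any plan that would destroy the
  # cluster should fail, so deletion becomes a deliberate two-step change
  # (flip the flag, apply, then destroy).
  deletion_protection = true
}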

@linouk23
Contributor

linouk23 commented Aug 4, 2022

Thanks for creating an issue @patrickherrera!

That's a great observation! Did you consider using the prevent_destroy lifecycle attribute?

Let me know if it helps!
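
For reference, it looks roughly like this (a minimal sketch; the rest of the cluster arguments are omitted):

resource "confluent_kafka_cluster" "main" {
  # ... cluster arguments ...

  lifecycle {
    # Terraform refuses to create any plan that would destroy this
    # resource, whether from "terraform destroy" or from a change that
    # requires the resource to be replaced.
    prevent_destroy = true
  }
}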

@linouk23 linouk23 added the question Further information is requested label Aug 4, 2022
@patrickherrera
Author

Thanks, but unfortunately that only prevents destructive changes to a resource that is still in the configuration, which I hope would never apply to a Cluster anyway. It won't prevent issues where the resource is accidentally removed from the source, or some other silly developer error, since the prevent_destroy directive is removed along with it and never applied.

@linouk23 linouk23 added the enhancement New feature or request label Aug 4, 2022
@linouk23
Contributor

linouk23 commented Aug 4, 2022

It won't prevent issues where the resource is accidentally removed from the source

Do you mean scenarios where a resource definition is commented out or removed from the TF configuration file?

@patrickherrera
Author

Correct. Or even potentially renamed, so that Terraform thinks it should destroy and recreate it.
It is one of those low-probability/high-impact things that should be relatively easy to implement and would save a lot of heartache.

@linouk23
Contributor

linouk23 commented Aug 6, 2022

@patrickherrera just to follow up, thanks again for the suggestion!

Looks like the TF Provider for GCP also uses a deletion_protection attribute that defaults to true for the google_bigquery_table resource.

I wonder what your opinion is on the default value: make it true and have the following UX:

On newer versions of the provider, you must explicitly set deletion_protection=false (and run terraform apply to write the field to state) in order to destroy an instance. It is recommended not to set this field (or set it to true) until you're ready to destroy.

or default to false as aws_db_instance does.
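
To spell out the difference, with a default of true, destroying a cluster becomes a two-apply workflow (sketch, assuming the proposed deletion_protection attribute):

# Step 1: opt out of protection and run "terraform apply" so the value
# is recorded in state.
resource "confluent_kafka_cluster" "main" {
  # ... cluster arguments ...
  deletion_protection = false  # would default to true
}

# Step 2: only now does "terraform destroy" (or removing the resource
# block) succeed. With a default of false, as in aws_db_instance,
# protection is opt-in and a single destroy still works out of the box.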

@patrickherrera
Author

In a way it is a breaking change, so I'm not sure if you have a particular policy for that in your module. I think it is fine to default to false so it doesn't surprise existing users, and people can read the changelog and opt in if they wish.

@linouk23
Contributor

linouk23 commented Aug 8, 2022

@patrickherrera that makes sense to me, thanks! Yeah I think if we default deletion_protection to false it won't be a breaking change.

Are there any other existing resources (besides confluent_kafka_cluster) where you think adding deletion_protection might be useful?


I'm thinking about confluent_kafka_topic and confluent_connector.

@patrickherrera
Author

Good question. If it defaults to false then no harm adding it to those extra resources. Anything that involves potential loss of data makes a lot of sense, so yes, topics and connectors would be good.
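
For example, the same (still hypothetical) flag on those resources could look like this, with the cluster/credential wiring omitted for brevity:

resource "confluent_kafka_topic" "orders" {
  topic_name       = "orders"
  partitions_count = 6
  # ... kafka_cluster, rest_endpoint, credentials ...

  # Proposed: refuse any plan that would delete the topic (and its data)
  # while this is true.
  deletion_protection = true
}

resource "confluent_connector" "s3_sink" {
  # ... environment, kafka_cluster, config_nonsensitive, config_sensitive ...

  # Proposed: same protection for the connector.
  deletion_protection = true
}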

@linouk23
Contributor

Or even potentially renamed and Terraform thinks it should destroy/recreate.

btw as a follow-up, we investigated the specific case of a possible recreation (destroy/create) when updating an attribute that has ForceNew: true, and it seems like prevent_destroy = true will help with that. See more details in this doc.

It seems like the use case for the deletion_protection attribute is limited to commenting out or deleting a resource definition in the TF configuration. Internally we're still figuring out the best default value for the deletion_protection attribute.

@patrickherrera
Author

Renaming the resource will also cause it to be deleted and recreated, and I have just confirmed that prevent_destroy has no effect here - Terraform happily deleted the old cluster and created a new one under the new name:

confluent_kafka_cluster.cluster: Destroying... [id=lkc-XXX]
confluent_kafka_cluster.cluster: Destruction complete after 2s
confluent_kafka_cluster.CC_cluster: Creating...
confluent_kafka_cluster.CC_cluster: Creation complete after 7s [id=lkc-YYY]

prevent_destroy did stop me from running 'terraform destroy' though; it said I had to remove the setting first.
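
For completeness, the rename itself can be expressed as a move rather than a replacement (Terraform 1.1+), which keeps the existing cluster - but that still relies on the developer remembering to add it, so it doesn't remove the need for a deletion_protection flag:

# After renaming the resource block from "cluster" to "CC_cluster",
# a moved block tells Terraform to update the state address instead of
# destroying the old cluster and creating a new one.
moved {
  from = confluent_kafka_cluster.cluster
  to   = confluent_kafka_cluster.CC_cluster
}

The older equivalent is running "terraform state mv confluent_kafka_cluster.cluster confluent_kafka_cluster.CC_cluster" before planning.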

@linouk23
Contributor

linouk23 commented Aug 19, 2022

Update: as a quick attempt to mitigate the issue, we added a note about using prevent_destroy to our docs.

airlock-confluentinc bot pushed a commit that referenced this issue Nov 14, 2024
APIT-2642 - Add Environment field as required for all flink UDF commands