
atomic modules may help recreation of depending resources #28880

Closed
mark-00 opened this issue Jun 4, 2021 · 2 comments
Labels
enhancement new new issue not yet triaged

Comments

@mark-00
mark-00 commented Jun 4, 2021

Current Terraform Version

terraform 0.14.8

Use-cases

Terraform is good at creating and destroying chains of dependent resources. For instance, in the code below, the google_compute_region_instance_group_manager is created first and then the google_compute_region_backend_service:

resource "google_compute_region_backend_service" "loadbalancer" {
  name          = "loadbalancer"
  backend {
    group = google_compute_region_instance_group_manager.backend.instance_group
  }
  ...
}

resource "google_compute_region_instance_group_manager" "backend" {
   ...
}

I often run into problems when one of the dependent resources has to be recreated, in this case the google_compute_region_instance_group_manager. Google won't allow this because it is still in use by the load balancer, so the Terraform run fails.

This is a pattern I have encountered a number of times with different resources.

Attempted Solutions

Proposal

A possible solution is marking a module as atomic.
When one resource in an atomic module has to be recreated or destroyed, all the resources of that module are first destroyed and then recreated in the correct order.
Maybe this could be implemented as automatic tainting of all the resources in the atomic module.
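As a point of comparison, the proposed behaviour can already be approximated by hand with the Terraform CLI, by tainting each resource in the module before applying. This is only a sketch; the module and resource addresses below are illustrative, taken from the example in this issue rather than from a real state:

```sh
# Mark every resource in the module for recreation on the next apply.
# Addresses are examples, not from a real state file.
terraform taint module.loadbalancer-backend.google_compute_region_instance_group_manager.backend
terraform taint module.loadbalancer-backend.google_compute_region_backend_service.loadbalancer

# Terraform then destroys and recreates the tainted resources in dependency order.
terraform apply
```

(Terraform 0.15.2 and later also offer `terraform apply -replace=ADDRESS` as a non-stateful alternative to `taint`.)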

loadbalancer-backend/main.tf

resource "google_compute_region_backend_service" "loadbalancer" {
  name          = "loadbalancer"
  backend {
    group = google_compute_region_instance_group_manager.backend.instance_group
  }
  ...
}

resource "google_compute_region_instance_group_manager" "backend" {
   ...
}
main.tf

module "loadbalancer-backend" {
   @atomic
   source = "./loadbalancer-backend"
   ...
}

References

@mark-00 mark-00 added enhancement new new issue not yet triaged labels Jun 4, 2021
@jbardin
Member

jbardin commented Jun 4, 2021

Hi @mark-00,

While we already have some proposals for additional lifecycle control (most likely a feature that causes replacement on a defined change, like #16065 or #11418 or #8099), this seems like a configuration which Terraform should be able to currently handle.

When you have a resource which is "registered" with another resource like this, you almost always need to use create_before_destroy in order to replace or remove any of the resources in the dependency chain. Setting create_before_destroy in google_compute_region_instance_group_manager would allow the new one to be created, then dependent updates can be applied, and finally old instances are destroyed after the replacement resource is in place.
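A minimal sketch of the lifecycle setting described above, applied to the example from this issue (other arguments elided, as in the original snippet):

```hcl
resource "google_compute_region_instance_group_manager" "backend" {
  # ... other arguments as in the original example ...

  lifecycle {
    # Create the replacement instance group manager first; the backend
    # service is then updated to reference it, and only afterwards is
    # the old instance group manager destroyed.
    create_before_destroy = true
  }
}
```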

This sometimes requires cooperation from the provider itself, which may need to make sure there are provisions for creating the same resource before the old one is destroyed, but overall it works much more smoothly since you don't have to tear down all related resources to replace a single instance.

Since we already have proposals for more precise control over forced replacement, I'm going to close this one out. If the provider is not able to handle using create_before_destroy I would suggest filing an issue with the provider to ensure replacement can happen correctly in general. If you have more questions about how to better structure the configuration, it's better to use the community forum where there are more people ready to help.

Thanks!

@jbardin jbardin closed this as completed Jun 4, 2021
@github-actions
github-actions bot commented Jul 5, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jul 5, 2021