One error we run into frequently is `ResourceInUse` when we modify a template used by a launch configuration connected to an auto-scaling group.
In our use case we're building microservices, and we have 8 sets of the same resources:

- an Elastic Load Balancer
- a Route 53 address
- a template for userdata
- a launch configuration
- an auto-scaling group
The only difference between these sets of resources is naming, which means the dependency chains should be identical. They also share a common set of security groups and IAM roles.
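For illustration, here is a stripped-down sketch of one such set (all names and values below are made up, and the ELB and Route 53 resources are omitted for brevity):

```hcl
# One of the 8 near-identical service sets (illustrative values only).
resource "template_file" "userdata" {
  filename = "userdata.tpl"

  vars {
    service_name = "hub-publish-api"
  }
}

resource "aws_launch_configuration" "service" {
  # No explicit name, so Terraform generates one (e.g. terraform-lcmco...).
  image_id             = "ami-12345678"        # placeholder AMI
  instance_type        = "t2.micro"
  user_data            = "${template_file.userdata.rendered}"
  security_groups      = ["sg-12345678"]       # shared across services
  iam_instance_profile = "shared-service-role" # shared across services
}

resource "aws_autoscaling_group" "service" {
  name                 = "dev-hub-publish-api-asg"
  availability_zones   = ["us-east-1a", "us-east-1b"]
  launch_configuration = "${aws_launch_configuration.service.name}"
  min_size             = 2
  max_size             = 4
}
```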
When I modify the userdata being loaded into the templates and attempt to apply changes to all 8 services, the error occurs on only some of them while the rest complete successfully. The errors look like this:
```
* ResourceInUse: Cannot delete launch configuration terraform-lcmcovfsibf3pa5hfvkglapybi because it is attached to AutoScalingGroup dev-hub-publish-api-asg
```
After searching the issue tracker, this appears to be an ongoing problem dating back to Terraform 0.3.x. The work-around suggested in April, adding a `lifecycle` of `create_before_destroy`, works once, but subsequent attempts fail.
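For clarity, the work-around amounts to this (a sketch, with only the relevant parts shown):

```hcl
resource "aws_launch_configuration" "service" {
  image_id      = "ami-12345678"
  instance_type = "t2.micro"
  user_data     = "${template_file.userdata.rendered}"

  # Suggested work-around: create the replacement launch configuration
  # first, point the ASG at it, then delete the old one.
  lifecycle {
    create_before_destroy = true
  }
}
```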
Our first thought was that the chain of dependencies was creating the issue, so we added the same lifecycle rule to the auto-scaling group, which gave us a cycle error.
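That attempt looked roughly like this:

```hcl
resource "aws_autoscaling_group" "service" {
  name                 = "dev-hub-publish-api-asg"
  availability_zones   = ["us-east-1a", "us-east-1b"]
  launch_configuration = "${aws_launch_configuration.service.name}"
  min_size             = 2
  max_size             = 4

  # Adding this here, on top of the same rule on the launch
  # configuration, is what produced the cycle error for us.
  lifecycle {
    create_before_destroy = true
  }
}
```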
Another option was to remove or disable the lifecycle on every apply attempt, but removing it caused Terraform to attempt to destroy and rebuild the entire environment, which failed with an API timeout:
```
* Post https://ec2.us-east-1.amazonaws.com/: read tcp 205.251.242.7:443: connection reset by peer
```
While I do not believe Terraform is the right tool for managing deployed software versions long-term, I do think this problem impedes more than just our use case.
It is worth mentioning a few additional things here:
- After adding `lifecycle { create_before_destroy = true }` to all of the ASG, LC, and `template_file` resources for the services, we are able to run `terraform apply`, but we still cannot run `terraform destroy` without Terraform complaining about cycles.
- An inability to destroy resources built by Terraform would be a complete blocker for us, so this is our highest-priority Terraform issue.
For reference, here is our services' Terraform script. Also, here are two Jenkins builds, which show the behavior failing for only some of the 8 services.