providers/aws: aws_autoscaling_group should depend on its aws_launch_configuration #1109
Comments
This is a separate issue. I'm not sure if we're tracking it, but this is due to the "eventually consistent" nature of AWS. I'm pretty sure there is a separate issue for this where we just have to do a stupid loop on the ASG to make it happen.
I don't think terraform is trying to delete the ASG. I'm noticing the same issue when I update the AMI for a launch configuration. The plan only implies that the launch configuration will be changed, so as expected when I try to apply the plan the error @pmoust posted is returned.
There's also an issue when making a change to both the autoscaling group and the launch configuration at the same time, where Terraform deletes the launch configuration and then tries to modify it (which throws a "launch configuration not found" error). To recreate, try changing the AMI of a launch configuration and changing the launch configuration of an autoscaling group. Terraform shows the following plan:
On execution, the below error happens because Terraform deletes the launch configuration before it tries to modify it:
One additional thing to note is that changing a launch configuration should not require destroying the ASG, as changing the launch configuration is part of the AutoScaling API.
I believe this is a duplicate of #532.
A targeted update to the launch_configuration 👍
I believe the original problem stated here is resolved in #1353.
@catsby nope, this issue still persists.
👍 issue still present. Also the SDK doesn't allow for LC updates, so the strategy would have to be either to create a new LC and associate it with the autoscaling groups before trying to destroy the old one, or to destroy the autoscaling groups and recreate them. I would prefer the creation of a new LC and then re-associating it.
Any updates on this one? Hit it yesterday myself... ASGs definitely take some time to delete (mainly if they're terminating instances linked to them), but I found I can delete the LCs almost straight away via the console as soon as the ASG delete has been initiated. Not sure how that translates via the API though...
The underlying problem here is that launch configurations are effectively immutable, so an update generally requires creation of a new launch config, and an update (not a new resource) of the autoscaling group to use it. Attempting to delete the existing launch config so it can be rebuilt with the same name will produce an error, as it is in use by the ASG. In my own scripts I generally create new LCs with timestamps in the name, and update the ASG with the new name. Old LCs are kept around for a short time (in case of rollback) and then deleted once unused. I'm not sure if there is an idiomatic way for Terraform to handle this situation, which doesn't quite fit into the usual create/destroy lifecycle.
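The rotation strategy described in the comment above (timestamped LC names, ASG updated to point at the new one) can be sketched in Terraform itself; this is only an illustration, and `var.release` is a hypothetical value you would bump on each rollout:

```hcl
# Bumping "release" forces a new, uniquely named LC; the ASG is then
# updated in place to reference it.
variable "release" {
  default = "20150701"
}

resource "aws_launch_configuration" "web" {
  name          = "web-lc-${var.release}"
  image_id      = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 2
  launch_configuration = "${aws_launch_configuration.web.name}"
}
```

Note that on its own this does not avoid the error in this issue: without `create_before_destroy`, Terraform still tries to delete the old LC (which is still attached to the ASG) before creating the new one.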
Recently discovered an (as far as I can tell) undocumented feature that helps with this problem, at a Terraform talk by @justincampbell. By adding the lifecycle `create_before_destroy` option as seen below, Terraform will create the new launch configuration and attach it to your ASG before destroying the old one.
@jessem note that to avoid LC name collisions when using `create_before_destroy`, you should omit the `name` attribute and let Terraform generate a unique one.
This issue is still valid.
Just ran into this as well. This is likely affecting any stack that uses the common elb+autoscale pattern.
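Putting the two comments above together, a minimal sketch of the workaround (resource names are illustrative):

```hcl
resource "aws_launch_configuration" "web" {
  # No "name" attribute: Terraform generates a unique name, so the
  # replacement LC created by create_before_destroy cannot collide
  # with the one still attached to the ASG.
  image_id      = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  availability_zones = ["us-east-1a"]
  min_size           = 1
  max_size           = 2

  # Interpolating the LC name means the ASG is updated in place to
  # point at the new LC before the old one is destroyed.
  launch_configuration = "${aws_launch_configuration.web.name}"
}
```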
You don't have to explicitly delete anything to hit this. This is an attempt to roll out a new AMI for Vault:

```
$ terraform plan
~ aws_autoscaling_group.hetzner-development-vault
    launch_configuration: "hetzner-development-vault" => "${aws_launch_configuration.hetzner-development-vault.id}"

-/+ aws_launch_configuration.hetzner-development-vault
    associate_public_ip_address: "false" => "0"
    ebs_block_device.#:          "0" => "<computed>"
    ebs_optimized:               "false" => "<computed>"
    image_id:                    "ami-31c2b946" => "ami-52367225" (forces new resource)
    instance_type:               "t2.micro" => "t2.micro"
    key_name:                    "heisenberg" => "heisenberg"
    name:                        "hetzner-development-vault" => "hetzner-development-vault"
    root_block_device.#:         "0" => "<computed>"
    security_groups.#:           "3" => "3"
    security_groups.1980580733:  "sg-5b18233e" => "sg-5b18233e"
    security_groups.3077299940:  "sg-27182342" => "sg-27182342"
    security_groups.54734731:    "sg-bac3f4df" => "sg-bac3f4df"
    user_data:                   "8e154addf6c9fc4833b86db7b8192c4cf328514a" => "8e154addf6c9fc4833b86db7b8192c4cf328514a"
```

Hmmm, okay, there's a lot of unexpected noise, but whatever, let's take that for a spin:

```
$ terraform apply
...
aws_launch_configuration.hetzner-development-vault: Destroying...
aws_launch_configuration.hetzner-development-vault: Error: 1 error(s) occurred:

* ResourceInUse: Cannot delete launch configuration hetzner-development-vault because it is attached to AutoScalingGroup hetzner-development-vault
    status code: 400, request id: [dbbc99c7-1995-11e5-a2f4-33bade2894bd]
Error applying plan:

2 error(s) occurred:

* ResourceInUse: Cannot delete launch configuration hetzner-development-vault because it is attached to AutoScalingGroup hetzner-development-vault
    status code: 400, request id: [dbbc99c7-1995-11e5-a2f4-33bade2894bd]
* aws_autoscaling_group.hetzner-development-vault: diffs didn't match during apply. This is a bug with Terraform and should be reported.

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
The create_before_destroy trick doesn't work for us; it causes Terraform to declare a cycle between the ASG and the LC.
Not being able to update a launch configuration's user-data or AMI is a major issue for my team. Something like the approach suggested in #1552 would be ideal for us.
@mitchellh, I still run into these issues with clean deploys and minor changes. One can carefully taint the right resource to work around the bug, but this is becoming a more important/painful snag for us. While I don't want to add pressure, I would appreciate your feedback on where this and/or #1552 sit in the roadmap. Thanks!
@ketzacoatl for now you can tag launch configurations as `create_before_destroy`. @ajlanghorn this might help you too. Also @joekhoobyar, you may not be tagging the things the LC depends on (and its dependencies and so on) as `create_before_destroy`.
Bump, just bit by this as well.
+1. Currently we're manually deleting the LC and ASG before running apply. I added create_before_destroy to all the places that I could see as interlinked at this point in the graph, but it still returns the error.
@blewa Did you try removing the `name` field?
@pikeas It looks like that's a required field...am I missing something? |
@blewa the docs are out of date, terraform will generate that field for you if left out. |
I concat the AMI to the end of the name so that they never collide.
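That naming scheme might look like the following sketch; `var.ami` and the resource names are illustrative, not from the thread:

```hcl
variable "ami" {
  default = "ami-12345678" # placeholder AMI
}

resource "aws_launch_configuration" "web" {
  # Embedding the AMI ID in the name means a new AMI always yields a
  # new, non-colliding LC name, even with create_before_destroy.
  name          = "web-lc-${var.ami}"
  image_id      = "${var.ami}"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}
```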
@pikeas - Here's what I get when I remove the name field:
You're right, I had my wires crossed a bit. Remove the name on the launch configuration resource, not the ASG resource. |
Good call, @stack72 - thanks! "This issue can be closed" pings are my favorite pings. 😀 |
Am I correct in understanding: We should allow Terraform to manage the name of the ASG, and all will be well when we create a new LC? Or what is the proper flow here? |
@ketzacoatl there was an update to the LaunchConfig docs last month:
Make sense? |
@stack72: Perfect! I missed that update, thanks for sharing. |
@ketzacoatl no worries, shout me if there are any issues with it |
Terraform v0.6.6. Case: a change in the cloud-init template of an AS launch configuration.
I proceeded to add a lifecycle policy block (per discussion above).
Here's what the tf config looks like:

```hcl
resource "template_file" "elk-cloud-init_staging" {
  filename = "./templates/elk-staging.yml"

  vars {
    channel         = "stable"
    reboot-strategy = "off"
    role            = "elk"
  }
}

# base CoreOS launch configuration
resource "aws_launch_configuration" "elk_staging" {
  instance_type = "t2.large"
  image_id      = "ami-37bdc15d" # 766.5.0

  security_groups = [
    "${aws_security_group.pph_coreos_staging.id}",
    "${aws_security_group.elk_staging.id}",
    "${aws_security_group.pph_allow_vpn.id}",
    "${aws_security_group.pph_admins.id}",
  ]

  root_block_device {
    volume_size           = 180
    volume_type           = "gp2"
    delete_on_termination = true
  }

  user_data                   = "${template_file.elk-cloud-init_staging.rendered}"
  iam_instance_profile        = "${aws_iam_instance_profile.coreos.name}"
  associate_public_ip_address = true

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "elk_staging" {
  name                      = "elk_staging"
  availability_zones        = ["${aws_subnet.pph.*.availability_zone}"]
  vpc_zone_identifier       = ["${aws_subnet.pph.*.id}"]
  load_balancers            = ["${aws_elb.elk_staging.id}"]
  min_size                  = 3
  max_size                  = 3
  desired_capacity          = 3
  health_check_type         = "EC2"
  health_check_grace_period = 120
  force_delete              = true
  launch_configuration      = "${aws_launch_configuration.elk_staging.name}"

  tag {
    key                 = "Name"
    value               = "ELK Staging"
    propagate_at_launch = true
  }

  tag {
    key                 = "Environment"
    value               = "Staging"
    propagate_at_launch = true
  }

  tag {
    key                 = "Role"
    value               = "ELK"
    propagate_at_launch = true
  }
}
```
Disregard my last comment. The cyclic dependency issue was due to the placement of the lifecycle block; as soon as a lifecycle policy was added in the launch configuration, the plan applied cleanly. Sorry for necromancing the thread.
I am still running into this issue even though I have removed the names from the ASGs. Here I had forgotten to add the proper SSH key name to the LC, so I add it:
Then I attempt to do the apply:
This is with a version compiled out of master at this commit. Here is one of the LC/ASG configs:
OK, so it seems that my lifecycle block was in the wrong place. It belongs on the launch config, not the ASG.
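For clarity, the placement that matters is on the launch configuration, not the ASG; a minimal sketch with illustrative resource names:

```hcl
resource "aws_launch_configuration" "app" {
  # "name" omitted so Terraform generates a unique one for each replacement.
  image_id      = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"

  # The lifecycle block goes here, on the LC that gets replaced...
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  name               = "app-asg"
  availability_zones = ["us-east-1a"]
  min_size           = 1
  max_size           = 1

  # ...not on the ASG, which is only updated in place to point at the new LC.
  launch_configuration = "${aws_launch_configuration.app.name}"
}
```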
When destroying an `aws_autoscaling_group` along with its assigned `aws_launch_configuration`, I expect Terraform to first issue a destroy on the ASG, wait till it's done, then delete the launch configuration. What happens instead is that the launch configuration delete fails because it is still attached to the ASG.
To reproduce, create, tweak, and then remove the following. I think it was working before.
Terraform v0.4.0-dev (23d90c0c02c10596eed79986e356b20bc6abb441)