
Changing template resource to use file() wants to recreate everything #6037

Closed
ashb opened this issue Apr 6, 2016 · 3 comments

@ashb
Contributor

ashb commented Apr 6, 2016

Upon upgrading to Terraform 0.6.14 I now get a warning about

  • template_file.userdata_app: template: looks like you specified a path instead of file contents. Use file() to load this path. Specifying a path directly is deprecated and will be removed in a future version.

Okay, I thought, I'll make that change; it seems simple enough.

Except that Terraform thinks it needs to delete and recreate all the dependent resources because the template has changed, even though the rendered result is the same, and I end up with this error:

  • Cycle: aws_launch_configuration.app_blue, aws_autoscaling_group.app_blue, aws_launch_configuration.app_blue (destroy), template_file.userdata_app, aws_launch_configuration.app_green, aws_autoscaling_group.app_green, aws_launch_configuration.app_green (destroy), template_file.userdata_app (destroy)

Resources:

resource "template_file" "userdata_app" {
    template = "${file("resources/userdata.yaml")}"

    vars     = {
        TF_ENVIRONMENT   = "${var.environment_name}"
        TF_NODE_FUNCTION = "app"
        TF_NODE_ID       = "-asg"
        TF_REGION        = "${var.vpc.region}"
    }
}

resource "aws_autoscaling_group" "app_blue" {
    depends_on = ["aws_route53_record.logs_local"]

    name                 = "${var.environment_name}_autoscaling_group_app_blue"
    availability_zones   = ["${split(",", var.app_blue.availability_zones)}"]
    min_size             = "${var.app_blue.min_size}"
    desired_capacity     = "${var.app_blue.desired_capacity}"
    max_size             = "${var.app_blue.max_size}"
    force_delete         = true
    launch_configuration = "${aws_launch_configuration.app_blue.id}"
    vpc_zone_identifier  = ["${aws_subnet.app_az0.id}", 
                            "${aws_subnet.app_az1.id}", 
                            "${aws_subnet.app_az2.id}"]
    termination_policies = ["OldestLaunchConfiguration"]

    lifecycle {
      create_before_destroy = true
    }

    tag = {
        key                 = "Name"
        value               = "${var.environment_name}_autoscaling_group_app_blue"
        propagate_at_launch = true
    }
    tag = {
        key                 = "Environment"
        value               = "${var.environment_name}"
        propagate_at_launch = true
    }
    tag = {
        key                 = "Hostname"
        value               = "app"
        propagate_at_launch = true
    }
    tag = {
        key                 = "AWSAccountNumber"
        value               = "${var.aws.account_id}"
        propagate_at_launch = true
    }
    tag = {
        key                 = "AvailabilityZone"
        value               = "undeterminable"
        propagate_at_launch = false
    }
    tag = {
        key                 = "VPC"
        value               = "${aws_vpc.default.id}"
        propagate_at_launch = true
    }
    tag = {
        key                 = "Function"
        value               = "Application Server Mesos Slave Blue Leg"
        propagate_at_launch = true
    }
    tag = {
        key                 = "Tier"
        value               = "app"
        propagate_at_launch = true
    }
}

resource "aws_launch_configuration" "app_blue" {
    name_prefix                 = "${var.environment_name}-launch-configuration-app-blue-"
    image_id                    = "${var.app_blue.ami_image}"
    instance_type               = "${var.app_blue.instance_type}"
    key_name                    = "${var.app_blue.key_name}"
    security_groups             = ["${aws_security_group.base.id}",
                                   "${aws_security_group.app.id}"]
    associate_public_ip_address = false
    iam_instance_profile        = "${aws_iam_instance_profile.app.name}"
    user_data                   = "${template_file.userdata_app.rendered}"

    lifecycle {
      create_before_destroy = true
    }

    root_block_device {
        volume_type = "gp2"
        volume_size = "50"
        delete_on_termination = true
    }
}



@apparentlymart
Contributor

Hi @ashb,

I think the problem here is related to how create_before_destroy is being used in your configuration. Unfortunately, due to how that flag works, cycles are produced on destroy when a create_before_destroy resource depends on a non-create_before_destroy resource.

The typical workaround for this situation is to put the create_before_destroy flag on the template_file as well, which is conceptually a strange thing to do, but I suggest you do it anyway: create_before_destroy on a logical resource like this is harmless, and it will simplify Terraform's graph and remove the cycles.
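For the configuration in this issue, that first workaround would look roughly like this (a sketch against the template_file resource shown below; only the lifecycle block is new):

```hcl
resource "template_file" "userdata_app" {
    template = "${file("resources/userdata.yaml")}"

    vars     = {
        TF_ENVIRONMENT   = "${var.environment_name}"
        TF_NODE_FUNCTION = "app"
        TF_NODE_ID       = "-asg"
        TF_REGION        = "${var.vpc.region}"
    }

    # Harmless on a logical resource like template_file, but it keeps
    # the destroy ordering consistent with the create_before_destroy
    # launch configurations that depend on it, removing the cycle.
    lifecycle {
        create_before_destroy = true
    }
}
```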

However, in your case that won't address the whole problem: you presumably don't want your launch configuration to get recreated here since the user_data is actually unchanged. There is a different limitation here in how Terraform handles templates, which means that the rendered template will temporarily be "computed" when it's being re-created, and Terraform only realizes at apply time that the result is still the same, after it's already too late.

There is a workaround for this part too: after you make your change, create a plan with the additional option -target=template_file.userdata_app, which will cause Terraform to only re-create the template_file instance. You can then do a normal, un-targeted plan and Terraform should then be able to see that in fact the user_data value hasn't changed and thus skip recreating the launch configuration.
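The two-step sequence described above, as Terraform CLI invocations (the resource address is taken from the config in this issue; this assumes the Terraform 0.6-era workflow of planning to a file and applying it):

```shell
# Step 1: re-create only the template_file, so its rendered value is
# known (no longer "computed") before anything else is planned.
terraform plan -target=template_file.userdata_app -out=tfplan
terraform apply tfplan

# Step 2: a normal, un-targeted plan should now see that user_data is
# unchanged and skip recreating the launch configurations.
terraform plan
```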

I'm sorry that you've got caught up here in the combination of two design problems currently present in Terraform. In the long run I think both of these issues will be fixed by the architectural change we've been working on in #4169, which will then make template_file be a new kind of object called a "data source" that will have a simpler lifecycle that avoids the need to "re-create" it when attributes change. In the mean time, I hope the workarounds above help you move past these problems for now.

@ashb
Contributor Author

ashb commented Apr 7, 2016

However, in your case that won't address the whole problem: you presumably don't want your launch configuration to get recreated here since the user_data is actually unchanged

Yes, that was the thing that surprised me most. Thanks for the detail on how it works, and thanks for the workaround; we'll give it a go.

@ashb ashb closed this as completed May 17, 2016
@ghost

ghost commented Apr 25, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 25, 2020