
[Bug]: Referencing launch template version in autoscaling resource fails on apply if lt is modified #34867

Open
PilotBob opened this issue Dec 11, 2023 · 8 comments
Labels
bug Addresses a defect in current functionality. service/autoscaling Issues and PRs that pertain to the autoscaling service.

Comments

@PilotBob

PilotBob commented Dec 11, 2023

Terraform Core Version

1.6.5

AWS Provider Version

5.3.0

Affected Resource(s)

aws_autoscaling_group
aws_launch_template

Expected Behavior

Any change to the launch template should create a new version of the template. The plan shows it is going to modify the launch template in place and change the version to null. Shouldn't it be showing "(known after apply)" instead?

Actual Behavior

The apply fails.

Relevant Error/Panic Output Snippet

When expanding the plan for
module.xxxxxxxx.module.xxxxxxxx-runner-group.aws_autoscaling_group.this[0]
to include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" produced an invalid new value for
.mixed_instances_policy[0].launch_template[0].launch_template_specification[0].version:
was cty.StringVal(""), but now cty.StringVal("2").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Terraform Configuration Files

data "aws_availability_zones" "available" {}

data "aws_ssm_parameter" "ami_id" {
  name = "/aws/service/ami-windows-latest/Windows_Server-2019-English-Core-ECS_Optimized/image_id"
}

locals {
  name   = "windows-runner-tf"
  region = "us-east-1"

  tags = {
    Name  = local.name
    Owner = "email@email.com"
    Team  = "TEAMNAME"
  }
}

module "windows-runner-group" {
  source = "terraform-aws-modules/autoscaling/aws"

  name = local.name

  vpc_zone_identifier       = var.private_subnets
  min_size                  = 0
  max_size                  = 3
  desired_capacity          = 1
  health_check_type         = "EC2"
  default_cooldown          = 30
  health_check_grace_period = 300
  wait_for_capacity_timeout = "30m"
  min_elb_capacity          = 1
  wait_for_elb_capacity     = 1
  metrics_granularity       = "1Minute"
  enable_monitoring         = false

  tags = local.tags

  launch_template_use_name_prefix = true
  update_default_version          = true

  image_id                   = data.aws_ssm_parameter.ami_id.value
  use_mixed_instances_policy = true
  mixed_instances_policy = {
    instances_distribution = {
      on_demand_allocation_strategy            = "lowest-price"
      on_demand_base_capacity                  = 0
      on_demand_percentage_above_base_capacity = 0
      spot_allocation_strategy                 = "price-capacity-optimized"
    }

    override = [
      {
        instance_requirements = {
          memory_gib_per_vcpu = {
            min = 2
          }
          memory_mib = {
            min = 8192
          }
          vcpu_count = {
            min = 2
          }
          generation = "current"
        }
    }]
  }

  security_groups = var.instance_security_group_ids
  block_device_mappings = [
    {
      device_name = "/dev/sda1"
      ebs = {
        delete_on_termination = true
        volume_size           = 300
        volume_type           = "gp3"
      }
    }
  ]

  user_data = base64encode("<powershell>Write-Host 'Hello Terraform'</powershell>")

  create_iam_instance_profile = false
  iam_instance_profile_arn    = var.iam_instance_profile_arn

  initial_lifecycle_hooks = [
    {
      name                 = "patching-reboot"
      lifecycle_transition = "autoscaling:EC2_INSTANCE_LAUNCHING"
      heartbeat_timeout    = 900
      default_result       = "ABANDON"
    }
  ]
}

resource "aws_sns_topic" "autoscaling_notifications" {
  name = "runner-notifications"
}

resource "aws_sns_topic_subscription" "email_notifications" {
  topic_arn = aws_sns_topic.autoscaling_notifications.arn
  protocol  = "email"
  endpoint  = "email@email.com"
}

resource "aws_autoscaling_notification" "runner_notifications" {
  group_names = [
    module.windows-runner-group.autoscaling_group_name
  ]

  notifications = [
    "autoscaling:EC2_INSTANCE_LAUNCH",
    "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
    "autoscaling:EC2_INSTANCE_TERMINATE",
    "autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
  ]
  topic_arn = aws_sns_topic.autoscaling_notifications.arn
}

This also fails if I don't use the terraform-aws-modules/autoscaling module and use the aws_autoscaling_group resource directly.

Steps to Reproduce

  1. apply the configuration
  2. modify the user data, or change the SSM parameter path to point at a different AMI (e.g. change Core to Full)
  3. plan and apply the changes
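For example, step 2 amounts to a change like the following against the configuration above (a sketch; the Full-image SSM parameter name shown here is an assumption based on the standard public parameter naming):

```hcl
# Step 2: switch the AMI lookup from the Core to the Full image.
# Any change that produces a new launch template version reproduces the error.
data "aws_ssm_parameter" "ami_id" {
  name = "/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized/image_id"
}
```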

Debug Output

No response

Panic Output

No response

Important Factoids

No response

References

No response

Would you like to implement a fix?

None

@PilotBob PilotBob added the bug Addresses a defect in current functionality. label Dec 11, 2023

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@github-actions github-actions bot added service/autoscaling Issues and PRs that pertain to the autoscaling service. service/ec2 Issues and PRs that pertain to the ec2 service. service/sns Issues and PRs that pertain to the sns service. service/ssm Issues and PRs that pertain to the ssm service. labels Dec 11, 2023
@terraform-aws-provider terraform-aws-provider bot added the needs-triage Waiting for first response or review from a maintainer. label Dec 11, 2023
@PilotBob
Author

PilotBob commented Dec 12, 2023

It appears adding

launch_template_version = "$Latest"

fixes this. So, I will close this issue. It would be nice if the autoscaling module documentation was a bit more verbose.
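For reference, the workaround is a one-line addition to the module call above (a sketch; the input name is taken from the terraform-aws-modules/autoscaling module, treat it as an assumption):

```hcl
module "windows-runner-group" {
  source = "terraform-aws-modules/autoscaling/aws"
  # ... same inputs as in the configuration above ...

  # Point the ASG at the latest launch template version instead of
  # a numbered version, avoiding the "invalid new value" error.
  launch_template_version = "$Latest"
}
```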

@terraform-aws-provider terraform-aws-provider bot removed the needs-triage Waiting for first response or review from a maintainer. label Dec 12, 2023
@PilotBob
Author

OK, I am going to reopen this.

While putting $Latest as the version prevents the apply from erroring, it no longer triggers an instance refresh, because nothing in the ASG changes.

I have followed the instance_refresh block docs for the aws_autoscaling_group resource and specified the launch template resource's version in the mixed_instances configuration of the ASG resource.

I have even added a depends_on in the aws_autoscaling_group to try to prevent this.

The plan shows that it is going to change the launch template version in the ASG to null. When I apply the plan, the launch template is modified, but the ASG modification then fails because a null launch template version isn't valid.

~ mixed_instances_policy {
          ~ launch_template {
              ~ launch_template_specification {
                  - version              = "3" -> null
                    # (2 unchanged attributes hidden)
                }

                # (1 unchanged block hidden)
            }

            # (1 unchanged block hidden)
        }

@PilotBob PilotBob reopened this Dec 14, 2023
@PilotBob PilotBob changed the title [Bug]: Changes to launch template always fail with apply didn't match plan [Bug]: Referencing launch template version in autoscaling resource fails on apply if lt is modified Dec 14, 2023
@herrbpl

herrbpl commented Jan 16, 2024

Ran into this today.
I was able to bypass it by first running terraform (or terragrunt) apply -refresh-only and then running plan/apply as usual.

@egorksv

egorksv commented Jan 27, 2024

Same with mixed_instances_policy:

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.jenkins_ec2_fleet.id
        version            = aws_launch_template.jenkins_ec2_fleet.latest_version
      }
    }
  }

I have the launch template configured to pick up the latest AMIs as they are baked, and this configuration does not work.

@project0
Contributor

project0 commented Feb 1, 2024

I suddenly have the same problem; I just updated the Terraform AWS provider from 5.9.0 to 5.34.0.
It looks like something in between changed this behavior.

@justinretzolk justinretzolk removed service/ec2 Issues and PRs that pertain to the ec2 service. service/sns Issues and PRs that pertain to the sns service. service/ssm Issues and PRs that pertain to the ssm service. labels Mar 21, 2024
@jeremyweber72

I ran into this issue as well; my workaround was to change the trigger for instance refresh to be a tag. Not ideal, but it works for now.
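A sketch of that workaround on the raw resource (assuming an instance_refresh block; "tag" is a supported trigger on aws_autoscaling_group, and the ami_revision tag name here is hypothetical):

```hcl
resource "aws_autoscaling_group" "this" {
  # ... existing arguments ...

  # Refresh instances when a tag changes, rather than relying on the
  # launch template version reference (which trips the provider bug).
  instance_refresh {
    strategy = "Rolling"
    triggers = ["tag"]
  }

  tag {
    key                 = "ami_revision" # hypothetical tag used as the refresh trigger
    value               = data.aws_ssm_parameter.ami_id.value
    propagate_at_launch = true
  }
}
```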

@PeiYee88

Adding

depends_on = [aws_launch_template.this]

inside

resource "aws_autoscaling_group" "this" {}

works for me.
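In context, that looks like the following (a sketch; resource names match the comment above):

```hcl
resource "aws_autoscaling_group" "this" {
  # ... existing arguments ...

  # Force the launch template to be updated before the ASG is evaluated,
  # so the version reference resolves to the new latest_version.
  depends_on = [aws_launch_template.this]
}
```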
