
ValidationError: Trying to update too many Load Balancers/Target Groups at once. The limit is 10 #11301

Closed
Itiho opened this issue Dec 15, 2019 · 21 comments · Fixed by #26654
Labels
bug Addresses a defect in current functionality. service/autoscaling Issues and PRs that pertain to the autoscaling service. service/elbv2 Issues and PRs that pertain to the elbv2 service.

Comments

Itiho commented Dec 15, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

terraform -v
Terraform v0.12.18

  • provider.aws v2.42.0

Affected Resource(s)

  • aws_autoscaling_attachment

Terraform Configuration Files

provider "aws" {
  region  = "us-east-1"
  version = "~>2.42.0"
}

variable "ami_id" {
    type = string
    default = "ami-055c10ae78f3a58a2"
    #default = "ami-028be67c2aa2f1ce1"
}

variable "vpc_zone_identifier" {
  default = ["subnet-04683ec0b1b1992fc"] #my test subnet
}

variable "vpc_id" {
  default = "vpc-031156f8fcca6f558" #my test vpc
}

variable "ports" {
  type = list(string)
  default = [
    "80",
    "81",
    "82",
    "83",
    "84",
    "85",
    "86",
    "87",
    "88",
    "89",
    "90",
    "91",
    "92",
    "93",
    "94",
    "95",
    "96",
    "97",
    "98",
    "99",
  ]
}


resource "aws_launch_configuration" "launch_config" {
  name_prefix                 = "lc-teste"
  image_id                    = var.ami_id
  instance_type               = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "as_group" {
  name                      = "${aws_launch_configuration.launch_config.name}-asg"
  launch_configuration      = aws_launch_configuration.launch_config.name
  max_size                  = "1"
  min_size                  = "1"
  desired_capacity          = "1"
  vpc_zone_identifier       = var.vpc_zone_identifier
}


resource "aws_lb" "lb" {
  name                             = "load-balance"
  subnets                          = var.vpc_zone_identifier
  load_balancer_type               = "network"
}


resource "aws_lb_target_group" "lb_target_group" {
  count                = length(var.ports)
  port                 = var.ports[count.index]
  vpc_id               = var.vpc_id
  protocol             = "TCP"
  target_type          = "instance"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_lb_listener" "lb_listener" {
  count             = length(var.ports)
  load_balancer_arn = aws_lb.lb.arn
  port              = var.ports[count.index]
  protocol          = "TCP"

  default_action {
    target_group_arn = aws_lb_target_group.lb_target_group[count.index].arn
    type             = "forward"
  }

  lifecycle {
    create_before_destroy = false
  }
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  count                  = length(var.ports)
  autoscaling_group_name = aws_autoscaling_group.as_group.name
  alb_target_group_arn   = aws_lb_target_group.lb_target_group[count.index].arn
}

Expected Behavior

Terraform should create the NLB, the listeners (20), the target groups (20), the autoscaling group, the launch configuration, and the autoscaling attachments (20).

Actual Behavior

Terraform does not complete the apply:
Error: Failure attaching AutoScaling Group lc-teste20191214225150423700000009-asg with ALB Target Group: arn:aws:elasticloadbalancing:us-east-1:106431551699:targetgroup/tf-2019121422515268710000000a/080dbbc407f8c918: ValidationError: Trying to update too many Load Balancers/Target Groups at once. The limit is 10
	status code: 400, request id: 753608db-1ec4-11ea-b6ba-67bd3222d77c

  on main.tf line 102, in resource "aws_autoscaling_attachment" "asg_attachment":
 102: resource "aws_autoscaling_attachment" "asg_attachment" {

Error: Failure attaching AutoScaling Group lc-teste20191214225150423700000009-asg with ALB Target Group: arn:aws:elasticloadbalancing:us-east-1:106431551699:targetgroup/tf-20191214225149124400000005/bcb4ef6045a6a129: ValidationError: Trying to update too many Load Balancers/Target Groups at once. The limit is 10
	status code: 400, request id: 7652ea9a-1ec4-11ea-a375-49e242c6fa68

  on main.tf line 102, in resource "aws_autoscaling_attachment" "asg_attachment":
 102: resource "aws_autoscaling_attachment" "asg_attachment" {

Steps to Reproduce

  1. terraform apply

Important Factoids

References

@ghost ghost added service/autoscaling Issues and PRs that pertain to the autoscaling service. service/elbv2 Issues and PRs that pertain to the elbv2 service. labels Dec 15, 2019
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Dec 15, 2019
n3ph (Contributor) commented Dec 15, 2019

Itiho (Author) commented Dec 16, 2019

Hello n3ph,

My problem is that attaching the autoscaling group to more than 10 target groups in a short period fails.

n3ph (Contributor) commented Dec 16, 2019

I wanted to state that this is a restriction of the AWS API backend.
It's even documented in the SDK.

Itiho (Author) commented Dec 16, 2019

I'm sorry, but I don't understand the Go language very well.

This function says that each call can attach at most 10 target groups to one autoscaling group. Am I right?

The error here is that I can't apply the aws_autoscaling_attachment resource multiple times in a short time.

The end result must be the same.

If I use the following syntax it works:

resource "aws_autoscaling_group" "as_group" {
  name                      = "${aws_launch_configuration.launch_config.name}-asg"
  launch_configuration      = aws_launch_configuration.launch_config.name
  max_size                  = "1"
  min_size                  = "1"
  desired_capacity          = "1"
  vpc_zone_identifier       = var.vpc_zone_identifier
  target_group_arns        = aws_lb_target_group.lb_target_group.*.arn
}


resource "aws_lb_target_group" "lb_target_group" {
  count                = length(var.ports)
  port                 = var.ports[count.index]
  vpc_id               = var.vpc_id
  protocol             = "TCP"
  target_type          = "instance"

  lifecycle {
    create_before_destroy = true
  }
}

Passing all target-group ARNs (over 10) works.
target_group_arns = aws_lb_target_group.lb_target_group.*.arn

justinretzolk (Member) commented:

Hey y'all 👋 Thank you for taking the time to file this issue and for the additional discussion around it. Given that there's been a number of AWS provider releases since the last update, can anyone confirm whether you're still experiencing this issue?

@justinretzolk justinretzolk added waiting-response Maintainers are waiting on response from community or contributor. and removed needs-triage Waiting for first response or review from a maintainer. labels Nov 18, 2021
sskalnik commented:

Still occurring with 3.74.3

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label Feb 25, 2022
@justinretzolk justinretzolk added the bug Addresses a defect in current functionality. label Feb 25, 2022
nisalupendra commented Mar 1, 2022

Still happening on 0.15.

gtfortytwo commented:

And still happening with v4.12.1

chernetskyi commented:

I was able to avoid the issue by placing a time_sleep between each batch of 10 ALB attachments: use depends_on to make a time_sleep resource depend on the first 10 attachments, then make the next batch depend on that time_sleep, and so on. For me, 30 seconds of wait time was more than enough (I haven't tested less). Use both the create_duration and destroy_duration fields, as the issue occurs during terraform destroy as well.

gtfortytwo commented:

The solution suggested by @chernetskyi seems to be a functional workaround (thanks for the tip!), but it's made doubly ugly by the fact that depends_on has to be a "static list expression", meaning you can't iterate a structure with something like a for expression to define the dependencies - they have to be called out individually. Already had to split the structures into batches. Blyech.

luis-a-sanchez commented:

Still happening in v4.18.0.

sherifkayad commented:

@chernetskyi can you please share your solution in more detail? Currently I am blocked by the same issue

chernetskyi commented:

@chernetskyi can you please share your solution in more detail? Currently I am blocked by the same issue

You need one time_sleep per batch of 10, i.e. $\lceil n / 10 \rceil$ time_sleep resources, where $n$ is the number of aws_autoscaling_attachment resources.

In case you have 12 aws_autoscaling_attachment resources:

resource "time_sleep" "first10attachments" {
  depends_on = [
    aws_autoscaling_attachment.first,
    aws_autoscaling_attachment.second,
    aws_autoscaling_attachment.third,
    aws_autoscaling_attachment.fourth,
    aws_autoscaling_attachment.fifth,
    aws_autoscaling_attachment.sixth,
    aws_autoscaling_attachment.seventh,
    aws_autoscaling_attachment.eighth,
    aws_autoscaling_attachment.ninth,
    aws_autoscaling_attachment.tenth,
  ]

  create_duration  = "30s"
  destroy_duration = "30s"
}

resource "time_sleep" "second10attachments" {
  depends_on = [
    aws_autoscaling_attachment.eleventh,
    aws_autoscaling_attachment.twelfth,
  ]

  create_duration  = "30s"
  destroy_duration = "30s"
}

sherifkayad commented:

@chernetskyi thanks for sharing 👍. What if I have count set and my attachments are iterated over dynamically? Is there a variant of your solution that would work with count? As far as I know, depends_on can't take something like aws_autoscaling_attachment.my_thing[count.index].

chernetskyi commented:

@chernetskyi thanks for sharing +1.. what if I have count set and basically my attachments are iterated over dynamically? .. is there a variant of your solution that would work with counts? .. as far as I know depends_on can't take something like aws_autoscaling_attachment.my_thing[count.index]

Then you should split your attachments across multiple count-based resources with at most 10 attachments each, and make each subsequent resource depend on the previous batch of 10.
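
The batching described above can be sketched roughly as follows for this issue's count-based setup. This is a hypothetical illustration, not tested code: the local and resource names (tg_batches, batch_0, batch_1) and the 30s durations are assumptions, the time_sleep resource requires the hashicorp/time provider, and it uses the lb_target_group_arn argument from newer provider versions (the original config above uses alb_target_group_arn).

```hcl
locals {
  # Split the target group ARNs into batches of at most 10,
  # matching the AWS API limit per attach/detach call.
  tg_batches = chunklist(aws_lb_target_group.lb_target_group[*].arn, 10)
}

resource "aws_autoscaling_attachment" "batch_0" {
  count                  = length(local.tg_batches[0])
  autoscaling_group_name = aws_autoscaling_group.as_group.name
  lb_target_group_arn    = local.tg_batches[0][count.index]
}

resource "time_sleep" "after_batch_0" {
  # Wait after the first 10 attachments, on both create and destroy.
  depends_on       = [aws_autoscaling_attachment.batch_0]
  create_duration  = "30s"
  destroy_duration = "30s"
}

resource "aws_autoscaling_attachment" "batch_1" {
  # Only create instances if a second batch exists.
  count                  = length(local.tg_batches) > 1 ? length(local.tg_batches[1]) : 0
  autoscaling_group_name = aws_autoscaling_group.as_group.name
  lb_target_group_arn    = local.tg_batches[1][count.index]

  depends_on = [time_sleep.after_batch_0]
}
```

The depends_on list stays static because it references whole resources (every instance of batch_0), not indexed instances, which is what makes this pattern compatible with count.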

sherifkayad commented:

I attempted to do that and I ended up with multiple dependencies and really ugly code. I think the only reasonable path forward is to fix this from the provider side and add internal batching!

sherifkayad commented Jul 20, 2022

For me, what worked was adding a count-dependent wait on creation / destruction (I know the solution looks ugly, but at least I didn't have to split the list into chunks or add time_sleep resources).

resource "aws_autoscaling_attachment" "my_asg_attachment" {
  count = length(local.my_local_list)

  autoscaling_group_name = var.workers_asg_name
  lb_target_group_arn    = aws_lb_target_group.my_nlb_tg[count.index].arn

  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    command     = "echo \"waiting for $(( 30 + 2 * ${count.index} )) seconds .. \" && sleep $(( 30 + 2 * ${count.index} ))"
  }
  provisioner "local-exec" {
    when = destroy
    interpreter = ["bash", "-c"]
    command     = "echo \"waiting for $(( 30 + 2 * ${count.index} )) seconds .. \" && sleep $(( 30 + 2 * ${count.index} ))"
  }
}

Note that my list creates / may destroy ~21 attachments. For bigger lists, the wait / sleep values might need to be tweaked.

lurcio (Contributor) commented Sep 5, 2022

I encountered this too. I couldn't see a nice way to batch requests, but I have a PR (#26654) to add retries when creating or deleting aws_autoscaling_attachment resources. This fixes the issue for me.

@github-actions github-actions bot added this to the v4.31.0 milestone Sep 9, 2022
github-actions (bot) commented:

This functionality has been released in v4.31.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
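
To pick up that release, the provider version constraint can be updated; a minimal sketch, with the lower bound inferred from the v4.31.0 milestone on this issue:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # The retry fix from #26654 landed in the v4.31.0 milestone,
      # so require at least that version.
      version = ">= 4.31.0"
    }
  }
}
```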

juliao-faurecia commented:

Still not working for me:
Terraform v1.1.7
on darwin_amd64

  • provider registry.terraform.io/hashicorp/aws v4.34.0

github-actions (bot) commented:

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 13, 2022