ECS Cluster : Cycle when using capacity_providers #12739
It's easy to create a cycle when using capacity providers: the cluster references the capacity provider, the capacity provider references the Auto Scaling group, the ASG references the launch configuration, and the launch configuration's user data references the cluster name.
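A minimal sketch of that reference chain (resource and variable names here are illustrative, not taken from the reporter's actual configuration):

```hcl
resource "aws_ecs_cluster" "cluster" {
  name               = "my-cluster"
  capacity_providers = [aws_ecs_capacity_provider.cp.name] # cluster -> capacity provider
}

resource "aws_ecs_capacity_provider" "cp" {
  name = "my-cp"
  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.asg.arn # capacity provider -> ASG
  }
}

resource "aws_autoscaling_group" "asg" {
  # ...
  launch_configuration = aws_launch_configuration.ecs.name # ASG -> launch config
}

resource "aws_launch_configuration" "ecs" {
  # ...
  # launch config -> cluster, which closes the cycle:
  user_data = "echo ECS_CLUSTER=${aws_ecs_cluster.cluster.name} >> /etc/ecs/ecs.config"
}
```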
@meriouma I've had the same issue. My workaround was to modify my launch configuration block as follows:

```diff
resource "aws_launch_configuration" "ecs" {
  image_id             = data.aws_ami.ecs-optimized-ami.image_id
  instance_type        = "t3a.medium"
  security_groups      = var.security_group_ids_for_ec2_instances
  iam_instance_profile = aws_iam_instance_profile.ecs.name

  user_data_base64 = base64encode(templatefile("${path.module}/ecs-user-data.sh", {
-   cluster_name = aws_ecs_cluster.cluster.name
+   cluster_name = local.hyphenized_name
  }))

  lifecycle {
    create_before_destroy = true
  }
}
```

This works because an ECS cluster's name/ARN is predictable: unlike an ASG, it does not contain a random uid when created. If you recreate an ASG you get a new ARN, but recreating an ECS cluster yields the same name/ARN. Hope this can help until the provider is fixed.
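A sketch of the full pattern the workaround above relies on, assuming the cluster name is held in a local (`hyphenized_name` and the file path are illustrative):

```hcl
locals {
  # The cluster name is fixed up front, so the launch configuration can
  # reference it without depending on the aws_ecs_cluster resource.
  hyphenized_name = "my-ecs-cluster"
}

resource "aws_ecs_cluster" "cluster" {
  name = local.hyphenized_name
}

resource "aws_launch_configuration" "ecs" {
  # ...
  user_data_base64 = base64encode(templatefile("${path.module}/ecs-user-data.sh", {
    # No reference to the cluster resource itself, so the cycle is broken.
    cluster_name = local.hyphenized_name
  }))
}
```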
This workaround works - thanks for that - although it violates one principle of good IaC: it creates a pair of resources that depend on one another without the dependency being represented in the code or understood by Terraform. That makes Terraform more prone to producing badly ordered plans. For that reason the workaround is acceptable temporarily, but I think this ultimately needs to be fixed rather than just worked around.
Also seeing issues with this around apply vs destroy ordering. Apply creates the resources in one order; destroy typically runs in the opposite order. The ECS cluster destroy hangs because the container instances from the Auto Scaling group are still running:

Error: Error deleting ECS cluster: ClusterContainsContainerInstancesException: The Cluster cannot be deleted while Container Instances are active or draining.
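One possible mitigation (a sketch, not confirmed by the thread) is to restore the ordering Terraform can no longer infer once the user data stops referencing the cluster resource: an explicit `depends_on`, so the ASG is created after the cluster and destroyed before it. Resource names here are illustrative:

```hcl
resource "aws_autoscaling_group" "ecs" {
  # ...
  launch_configuration = aws_launch_configuration.ecs.name

  # Terraform cannot infer this dependency from attribute references, so it
  # is stated explicitly: create the ASG after the cluster, and destroy it
  # (draining its container instances) before the cluster is deleted.
  depends_on = [aws_ecs_cluster.cluster]
}
```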
Closed via #22672
Terraform Version
Affected Resource(s)
Terraform Configuration Files
Expected Behavior
Apply works.
Actual Behavior
Steps to Reproduce
1. `terraform apply`