I have a Terraform file which sets up an AWS auto-scaling group and creates ECS tasks. These also sit behind a load balancer. When I run terraform apply for the first time, almost everything runs fine, but when it comes to creating the aws_ecs_service, I get this error:
* InvalidParameterException: Unable to assume role and validate the listeners configured on your load balancer. Please verify the role being passed has the proper permissions.
status code: 400, request id: []
If I re-apply a second time, it works, and I can verify in the AWS console that everything is set up as expected. It seems there is some sort of problem with the role I'm creating: it isn't being recognised by the time this part of the deployment happens (see the sketch after the configuration below).
Here is a snapshot of the main .tf file I'm using (note: it's a work-in-progress so probably full of bad practice!):
/**
 * AWS provider
 */
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

/**
 * IAM role to allow ECS to assume control of EC2 instances
 */
resource "aws_iam_role" "ecs" {
  name               = "gocd-ecs"
  path               = "/"
  assume_role_policy = "${file("iam-policies/ecs-role.json")}"
}
/**
 * IAM policy allowing the ECS role to control EC2
 */
resource "aws_iam_role_policy" "ecs" {
  name   = "gocd-ecs"
  role   = "${aws_iam_role.ecs.id}"
  policy = "${file("iam-policies/ecs-role-policy.json")}"
}
/**
 * IAM role for services
 */
resource "aws_iam_role" "gocd" {
  name               = "gocd-service"
  path               = "/"
  assume_role_policy = "${file("iam-policies/service-role.json")}"
}

/**
 * IAM policy for service roles
 */
resource "aws_iam_role_policy" "gocd" {
  name   = "gocd-service"
  role   = "${aws_iam_role.gocd.id}"
  policy = "${file("iam-policies/service-role-policy.json")}"
}

/**
 * Provides internal access to container ports
 */
resource "aws_security_group" "ecs" {
  name        = "gocd-ecs"
  description = "Container Instance Allowed Ports"

  ingress {
    from_port   = 1
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "gocd-ecs"
  }
}

/**
 * IAM profile to be used in auto-scaling launch configuration.
 */
resource "aws_iam_instance_profile" "ecs" {
  name  = "gocd-ecs"
  path  = "/"
  roles = ["${aws_iam_role.ecs.name}"]
}

/**
 * Launch configuration used by autoscaling group
 */
resource "aws_launch_configuration" "ecs" {
  name                 = "gocd-ecs"
  image_id             = "ami-7948320e"
  instance_type        = "t2.micro"
  security_groups      = ["${aws_security_group.ecs.id}"]
  iam_instance_profile = "${aws_iam_instance_profile.ecs.name}"
  user_data            = "${file("ecs.sh")}"
}

/**
 * Autoscaling group.
 */
resource "aws_autoscaling_group" "ecs" {
  name                 = "gocd-ecs"
  availability_zones   = ["eu-west-1a"]
  launch_configuration = "${aws_launch_configuration.ecs.name}"
  min_size             = 1
  max_size             = 10
  desired_capacity     = 1
}

/**
 * ECS cluster
 */
resource "aws_ecs_cluster" "gocd" {
  name = "gocd"
}

/**
 * Load balancer
 */
resource "aws_elb" "gocd" {
  name                = "gocd"
  availability_zones  = ["eu-west-1a"]
  connection_draining = false

  listener {
    instance_port     = 8153
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 60
    target              = "HTTP:8153/"
    interval            = 300
  }
}
/**
 * ECS task definition
 */
resource "aws_ecs_task_definition" "gocd" {
  family                = "gocd"
  container_definitions = "${file("task-definitions/gocd.json")}"
}

/**
 * ECS service, registered with the load balancer via the service role
 */
resource "aws_ecs_service" "gocd" {
  name            = "gocd"
  cluster         = "${aws_ecs_cluster.gocd.id}"
  task_definition = "${aws_ecs_task_definition.gocd.id}"
  iam_role        = "${aws_iam_role.gocd.id}"
  desired_count   = 1

  load_balancer {
    elb_name       = "${aws_elb.gocd.id}"
    container_name = "gocd"
    container_port = 8153
  }
}
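My current suspicion is an ordering problem: the aws_ecs_service only references aws_iam_role.gocd, so Terraform may create the service before aws_iam_role_policy.gocd has been attached (or before IAM has propagated), leaving the role unable to touch the ELB listeners on the first run. As a minimal sketch of a workaround I'm considering (not verified, and assuming the ordering really is the cause), the service could be made to depend explicitly on the policy attachment:

/**
 * Sketch only: same service as above, but waiting for the role policy
 * to be attached before the service is created.
 */
resource "aws_ecs_service" "gocd" {
  name            = "gocd"
  cluster         = "${aws_ecs_cluster.gocd.id}"
  task_definition = "${aws_ecs_task_definition.gocd.id}"
  iam_role        = "${aws_iam_role.gocd.id}"
  desired_count   = 1

  # Hypothetical workaround: don't create the service until the policy that
  # grants the ELB permissions is attached to the service role.
  depends_on = ["aws_iam_role_policy.gocd"]

  load_balancer {
    elb_name       = "${aws_elb.gocd.id}"
    container_name = "gocd"
    container_port = 8153
  }
}

If the underlying cause is IAM propagation delay rather than resource ordering, this alone may not be enough, but it at least makes the dependency explicit.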