resource "aws_elb" "demo" {
  name = "${var.tf_prefix}elb"

  # The same availability zone as our instance
  availability_zones = ["${aws_instance.demo.availability_zone}"]

  # The instance is registered automatically
  instances = ["${aws_instance.demo.id}"]

  #cross_zone_load_balancing  = true
  idle_timeout                = 300
  connection_draining_timeout = 0

  listener {
    instance_port     = 80
    instance_protocol = "HTTP"
    lb_port           = 80
    lb_protocol       = "HTTP"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 5
    timeout             = 3
    interval            = 10
    target              = "TCP:80"
  }

  tags {
    name = "${var.tf_prefix}elb-tag"
  }
}
Terraform Version
Terraform v0.9.2
Affected Resource(s)
aws_elb
Terraform Configuration Files
Debug Output
% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
aws_elb.demo
availability_zones.#: ""
connection_draining: "false"
connection_draining_timeout: "0"
cross_zone_load_balancing: "true"
dns_name: ""
health_check.#: "1"
health_check.0.healthy_threshold: "2"
health_check.0.interval: "10"
health_check.0.target: "TCP:80"
health_check.0.timeout: "3"
health_check.0.unhealthy_threshold: "5"
idle_timeout: "300"
instances.#: ""
internal: ""
listener.#: "1"
listener.3057123346.instance_port: "80"
listener.3057123346.instance_protocol: "HTTP"
listener.3057123346.lb_port: "80"
listener.3057123346.lb_protocol: "HTTP"
listener.3057123346.ssl_certificate_id: ""
name: "TVTF-elb"
security_groups.#: ""
source_security_group: ""
source_security_group_id: ""
subnets.#: ""
tags.%: "1"
tags.name: "TVTF-elb-tag"
zone_id: ""
aws_instance.demo
ami: "emi-fc9afb55"
associate_public_ip_address: ""
availability_zone: ""
ebs_block_device.#: ""
ephemeral_block_device.#: ""
instance_state: ""
instance_type: "t1.micro"
ipv6_addresses.#: ""
key_name: "tomvaini"
network_interface_id: ""
placement_group: ""
private_dns: ""
private_ip: ""
public_dns: ""
public_ip: ""
root_block_device.#: ""
security_groups.#: "1"
security_groups.77372583: "TVTF-sg"
source_dest_check: "true"
subnet_id: ""
tags.%: "1"
tags.Name: "TVTF-instance-tag"
tenancy: ""
vpc_security_group_ids.#: ""
aws_security_group.demo
description: "TVTF-security-group"
egress.#: ""
ingress.#: "3"
ingress.1799340084.cidr_blocks.#: "1"
ingress.1799340084.cidr_blocks.0: "0.0.0.0/0"
ingress.1799340084.from_port: "-1"
ingress.1799340084.ipv6_cidr_blocks.#: "0"
ingress.1799340084.protocol: "icmp"
ingress.1799340084.security_groups.#: "0"
ingress.1799340084.self: "false"
ingress.1799340084.to_port: "-1"
ingress.2214680975.cidr_blocks.#: "1"
ingress.2214680975.cidr_blocks.0: "0.0.0.0/0"
ingress.2214680975.from_port: "80"
ingress.2214680975.ipv6_cidr_blocks.#: "0"
ingress.2214680975.protocol: "tcp"
ingress.2214680975.security_groups.#: "0"
ingress.2214680975.self: "false"
ingress.2214680975.to_port: "80"
ingress.2541437006.cidr_blocks.#: "1"
ingress.2541437006.cidr_blocks.0: "0.0.0.0/0"
ingress.2541437006.from_port: "22"
ingress.2541437006.ipv6_cidr_blocks.#: "0"
ingress.2541437006.protocol: "tcp"
ingress.2541437006.security_groups.#: "0"
ingress.2541437006.self: "false"
ingress.2541437006.to_port: "22"
name: "TVTF-sg"
owner_id: ""
vpc_id: ""
Plan: 3 to add, 0 to change, 0 to destroy.
% terraform apply
aws_security_group.demo: Creating...
description: "" => "TVTF-security-group"
egress.#: "" => ""
ingress.#: "" => "3"
ingress.1799340084.cidr_blocks.#: "" => "1"
ingress.1799340084.cidr_blocks.0: "" => "0.0.0.0/0"
ingress.1799340084.from_port: "" => "-1"
ingress.1799340084.ipv6_cidr_blocks.#: "" => "0"
ingress.1799340084.protocol: "" => "icmp"
ingress.1799340084.security_groups.#: "" => "0"
ingress.1799340084.self: "" => "false"
ingress.1799340084.to_port: "" => "-1"
ingress.2214680975.cidr_blocks.#: "" => "1"
ingress.2214680975.cidr_blocks.0: "" => "0.0.0.0/0"
ingress.2214680975.from_port: "" => "80"
ingress.2214680975.ipv6_cidr_blocks.#: "" => "0"
ingress.2214680975.protocol: "" => "tcp"
ingress.2214680975.security_groups.#: "" => "0"
ingress.2214680975.self: "" => "false"
ingress.2214680975.to_port: "" => "80"
ingress.2541437006.cidr_blocks.#: "" => "1"
ingress.2541437006.cidr_blocks.0: "" => "0.0.0.0/0"
ingress.2541437006.from_port: "" => "22"
ingress.2541437006.ipv6_cidr_blocks.#: "" => "0"
ingress.2541437006.protocol: "" => "tcp"
ingress.2541437006.security_groups.#: "" => "0"
ingress.2541437006.self: "" => "false"
ingress.2541437006.to_port: "" => "22"
name: "" => "TVTF-sg"
owner_id: "" => ""
vpc_id: "" => ""
aws_security_group.demo: Creation complete (ID: sg-c6e3eb27)
aws_instance.demo: Creating...
ami: "" => "emi-fc9afb55"
associate_public_ip_address: "" => ""
availability_zone: "" => ""
ebs_block_device.#: "" => ""
ephemeral_block_device.#: "" => ""
instance_state: "" => ""
instance_type: "" => "t1.micro"
ipv6_addresses.#: "" => ""
key_name: "" => "tomvaini"
network_interface_id: "" => ""
placement_group: "" => ""
private_dns: "" => ""
private_ip: "" => ""
public_dns: "" => ""
public_ip: "" => ""
root_block_device.#: "" => ""
security_groups.#: "" => "1"
security_groups.77372583: "" => "TVTF-sg"
source_dest_check: "" => "true"
subnet_id: "" => ""
tags.%: "" => "1"
tags.Name: "" => "TVTF-instance-tag"
tenancy: "" => ""
vpc_security_group_ids.#: "" => ""
aws_instance.demo: Still creating... (10s elapsed)
aws_instance.demo: Still creating... (20s elapsed)
aws_instance.demo: Still creating... (30s elapsed)
aws_instance.demo: Still creating... (40s elapsed)
aws_instance.demo: Still creating... (50s elapsed)
aws_instance.demo: Still creating... (1m0s elapsed)
aws_instance.demo: Still creating... (1m10s elapsed)
aws_instance.demo: Still creating... (1m20s elapsed)
aws_instance.demo: Still creating... (1m30s elapsed)
aws_instance.demo: Still creating... (1m40s elapsed)
aws_instance.demo: Still creating... (1m50s elapsed)
aws_instance.demo: Provisioning with 'file'...
aws_instance.demo: Provisioning with 'remote-exec'...
aws_instance.demo (remote-exec): Connecting to remote host via SSH...
aws_instance.demo (remote-exec): Host: 1.2.3.4
aws_instance.demo (remote-exec): User: root
aws_instance.demo (remote-exec): Password: false
aws_instance.demo (remote-exec): Private key: false
aws_instance.demo (remote-exec): SSH Agent: true
aws_instance.demo (remote-exec): Connected!
aws_instance.demo (remote-exec): 10:06:30 up 0 min, 1 user, load average: 0.79, 0.23, 0.08
aws_instance.demo: Still creating... (2m0s elapsed)
aws_instance.demo: Still creating... (2m10s elapsed)
aws_instance.demo (remote-exec): Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
aws_instance.demo: Still creating... (2m20s elapsed)
aws_instance.demo: Creation complete (ID: i-4b4ee7f7)
aws_elb.demo: Creating...
availability_zones.#: "" => "1"
availability_zones.2812906845: "" => "dev_1"
connection_draining: "" => "false"
connection_draining_timeout: "" => "0"
cross_zone_load_balancing: "" => "true"
dns_name: "" => ""
health_check.#: "" => "1"
health_check.0.healthy_threshold: "" => "2"
health_check.0.interval: "" => "10"
health_check.0.target: "" => "TCP:80"
health_check.0.timeout: "" => "3"
health_check.0.unhealthy_threshold: "" => "5"
idle_timeout: "" => "300"
instances.#: "" => "1"
instances.3365339903: "" => "i-4b4ee7f7"
internal: "" => ""
listener.#: "" => "1"
listener.3057123346.instance_port: "" => "80"
listener.3057123346.instance_protocol: "" => "HTTP"
listener.3057123346.lb_port: "" => "80"
listener.3057123346.lb_protocol: "" => "HTTP"
listener.3057123346.ssl_certificate_id: "" => ""
name: "" => "TVTF-elb"
security_groups.#: "" => ""
source_security_group: "" => ""
source_security_group_id: "" => ""
subnets.#: "" => ""
tags.%: "" => "1"
tags.name: "" => "TVTF-elb-tag"
zone_id: "" => ""
aws_elb.demo: Still creating... (10s elapsed)
aws_elb.demo: Still creating... (20s elapsed)
aws_elb.demo: Still creating... (30s elapsed)
aws_elb.demo: Still creating... (40s elapsed)
aws_elb.demo: Still creating... (50s elapsed)
aws_elb.demo: Still creating... (1m0s elapsed)
Error applying plan:
1 error(s) occurred:
aws_elb.demo: 1 error(s) occurred:
aws_elb.demo: Failure adding new or updated ELB listeners: DuplicateListener: A Listener already exists
status code: 400, request id: 05670084-00e2-4e41-9992-781af5028a8f
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
%
% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_security_group.demo: Refreshing state... (ID: sg-c6e3eb27)
aws_instance.demo: Refreshing state... (ID: i-4b4ee7f7)
aws_elb.demo: Refreshing state... (ID: TVTF-elb)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
~ aws_elb.demo
cross_zone_load_balancing: "false" => "true"
health_check.0.healthy_threshold: "3" => "2"
health_check.0.interval: "30" => "10"
health_check.0.timeout: "5" => "3"
health_check.0.unhealthy_threshold: "3" => "5"
idle_timeout: "60" => "300"
instances.#: "0" => "1"
instances.3365339903: "" => "i-4b4ee7f7"
Plan: 0 to add, 1 to change, 0 to destroy.
%
% terraform apply
aws_security_group.demo: Refreshing state... (ID: sg-c6e3eb27)
aws_instance.demo: Refreshing state... (ID: i-4b4ee7f7)
aws_elb.demo: Refreshing state... (ID: TVTF-elb)
aws_elb.demo: Modifying... (ID: TVTF-elb)
cross_zone_load_balancing: "false" => "true"
health_check.0.healthy_threshold: "3" => "2"
health_check.0.interval: "30" => "10"
health_check.0.timeout: "5" => "3"
health_check.0.unhealthy_threshold: "3" => "5"
idle_timeout: "60" => "300"
instances.#: "0" => "1"
instances.3365339903: "" => "i-4b4ee7f7"
aws_elb.demo: Modifications complete (ID: TVTF-elb)
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the terraform show command.

State path:
% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_security_group.demo: Refreshing state... (ID: sg-c6e3eb27)
aws_instance.demo: Refreshing state... (ID: i-4b4ee7f7)
aws_elb.demo: Refreshing state... (ID: TVTF-elb)
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.
%
Expected Behavior
The ELB should be created on the first apply, without timing out.
Actual Behavior
ELB creation fails after a 60-second timeout, but succeeds if the resource is updated a short while later.
Steps to Reproduce
% terraform plan
% terraform apply
% terraform plan
% terraform apply
% terraform plan
Important Factoids
This is Eucalyptus 4.4. ELB creation simply takes longer there, because an ELB is an instance running a service image. If that image is not already present on the node that will run the ELB, fetching it from backend storage can take 3-5 minutes, plus about a minute to boot the image.
Everything works if I run terraform apply again 1-2 minutes later, once the image has come up; the second apply then just sets the parameters that are still missing.
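The manual workaround described above (wait for the service image, then re-run terraform apply) can be sketched as a small shell wrapper. This is only an illustration based on this report, not provider-supported behavior; the retry_after_delay helper name and the 180-second delay are assumptions, and the delay should be tuned to however long the ELB image takes to come up on the node.

```shell
# Run a command once; if it fails, wait for the given delay and retry once.
# Sketch of the manual "apply, wait, apply again" workaround from this report.
retry_after_delay() {
  delay="$1"; shift
  if "$@"; then
    return 0
  fi
  echo "first attempt failed; waiting ${delay}s before retrying" >&2
  sleep "$delay"
  "$@"
}

# Hypothetical usage, with a delay long enough for the service image to boot:
#   retry_after_delay 180 terraform apply
```

The second apply only needs to fill in the attributes the first attempt left unset, which matches the "0 to add, 1 to change" plan shown in the debug output above.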