
Diffs didn't match during apply for aws_elb #639

Closed · sarahhodne opened this issue Dec 10, 2014 · 9 comments

@sarahhodne (Contributor)

I've been following along with the "Getting Started" guide for Atlas, and when I got to the last terraform apply on this page (in case the pages change, that's the step where you add the security group and switch the instance's AMI to the Atlas artifact), I got this error:

λ terraform apply
atlas_artifact.web: Refreshing state... (ID: us-east-1:ami-32690e5a)
aws_instance.web.1: Refreshing state... (ID: i-24f8e5c5)
aws_instance.web.0: Refreshing state... (ID: i-27f8e5c6)
aws_elb.web: Refreshing state... (ID: atlas-example-elb)
aws_instance.web.1: Destroying...
aws_instance.web.0: Destroying...
aws_security_group.allow_all: Creating...
  description:                 "" => "Allow all inbound traffic"
  ingress.#:                   "" => "1"
  ingress.0.cidr_blocks.#:     "" => "1"
  ingress.0.cidr_blocks.0:     "" => "0.0.0.0/0"
  ingress.0.from_port:         "" => "0"
  ingress.0.protocol:          "" => "tcp"
  ingress.0.security_groups.#: "" => "0"
  ingress.0.self:              "" => "0"
  ingress.0.to_port:           "" => "65535"
  name:                        "" => "allow_all"
  owner_id:                    "" => "<computed>"
  vpc_id:                      "" => "<computed>"
aws_security_group.allow_all: Creation complete
aws_instance.web.0: Destruction complete
aws_instance.web.1: Destruction complete
aws_instance.web.1: Creating...
  ami:               "" => "ami-32690e5a"
  availability_zone: "" => "<computed>"
  instance_type:     "" => "t1.micro"
  key_name:          "" => "<computed>"
  private_dns:       "" => "<computed>"
  private_ip:        "" => "<computed>"
  public_dns:        "" => "<computed>"
  public_ip:         "" => "<computed>"
  security_groups.#: "" => "1"
  security_groups.0: "" => "allow_all"
  subnet_id:         "" => "<computed>"
  tenancy:           "" => "<computed>"
aws_instance.web.0: Creating...
  ami:               "" => "ami-32690e5a"
  availability_zone: "" => "<computed>"
  instance_type:     "" => "t1.micro"
  key_name:          "" => "<computed>"
  private_dns:       "" => "<computed>"
  private_ip:        "" => "<computed>"
  public_dns:        "" => "<computed>"
  public_ip:         "" => "<computed>"
  security_groups.#: "" => "1"
  security_groups.0: "" => "allow_all"
  subnet_id:         "" => "<computed>"
  tenancy:           "" => "<computed>"
aws_instance.web.0: Creation complete
aws_instance.web.1: Creation complete
Error applying plan:

aws_elb.web: diffs didn't match during apply. This is a bug with the resource provider, please report a bug.

My current Terraform file:

variable "atlas_token" {}
variable "aws_access_key" {}
variable "aws_secret_key" {}

provider "atlas" {
  token = "${var.atlas_token}"
}

provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region = "us-east-1"
}

resource "atlas_artifact" "web" {
  name = "henrikhodne/atlas-example"
  type = "ami"
}

resource "aws_elb" "web"{
  name = "atlas-example-elb"
  availability_zones = ["${aws_instance.web.*.availability_zone}"]

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  instances = ["${aws_instance.web.*.id}"]
}

resource "aws_security_group" "allow_all" {
  name = "allow_all"
  description = "Allow all inbound traffic"

  ingress {
    from_port = 0
    to_port = 65535
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  instance_type = "t1.micro"
  ami = "${atlas_artifact.web.metadata_full.region-us-east-1}"
  security_groups = ["${aws_security_group.allow_all.name}"]

  count = 2
}

The diff from the latest terraform apply is the addition of the aws_security_group (and the corresponding field in aws_instance), and changing aws_instance.web.ami from the hard-coded "ami-…" ID to the interpolation.
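
For reference, here is roughly what the aws_instance block looked like before this step, sketched from the description above (the AMI ID is a placeholder for the hard-coded value used earlier in the guide, and there was no security_groups argument, so the instances landed in the default group):

resource "aws_instance" "web" {
  instance_type = "t1.micro"
  # Placeholder for the hard-coded AMI ID used earlier in the guide.
  ami = "ami-XXXXXXXX"

  count = 2
}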

@sarahhodne (Contributor, Author)

Here's the terraform plan output as well (from before the apply that errored):

λ terraform plan
Refreshing Terraform state prior to plan...

atlas_artifact.web: Refreshing state... (ID: us-east-1:ami-32690e5a)
aws_instance.web.1: Refreshing state... (ID: i-24f8e5c5)
aws_instance.web.0: Refreshing state... (ID: i-27f8e5c6)
aws_elb.web: Refreshing state... (ID: travis-lite-elb)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

~ aws_elb.web
    availability_zones.#: "2" => "<computed>"

-/+ aws_instance.web.0
    ami:               "ami-408c7f28" => "ami-32690e5a" (forces new resource)
    availability_zone: "us-east-1a" => "<computed>"
    instance_type:     "t1.micro" => "t1.micro"
    key_name:          "" => "<computed>"
    private_dns:       "ip-10-28-110-68.ec2.internal" => "<computed>"
    private_ip:        "10.28.110.68" => "<computed>"
    public_dns:        "ec2-54-145-139-201.compute-1.amazonaws.com" => "<computed>"
    public_ip:         "54.145.139.201" => "<computed>"
    security_groups.#: "1" => "1" (forces new resource)
    security_groups.0: "default" => "allow_all" (forces new resource)
    subnet_id:         "" => "<computed>"
    tenancy:           "default" => "<computed>"

-/+ aws_instance.web.1
    ami:               "ami-408c7f28" => "ami-32690e5a" (forces new resource)
    availability_zone: "us-east-1a" => "<computed>"
    instance_type:     "t1.micro" => "t1.micro"
    key_name:          "" => "<computed>"
    private_dns:       "ip-10-230-40-189.ec2.internal" => "<computed>"
    private_ip:        "10.230.40.189" => "<computed>"
    public_dns:        "ec2-54-167-144-148.compute-1.amazonaws.com" => "<computed>"
    public_ip:         "54.167.144.148" => "<computed>"
    security_groups.#: "1" => "1" (forces new resource)
    security_groups.0: "default" => "allow_all" (forces new resource)
    subnet_id:         "" => "<computed>"
    tenancy:           "default" => "<computed>"

+ aws_security_group.allow_all
    description:                 "" => "Allow all inbound traffic"
    ingress.#:                   "" => "1"
    ingress.0.cidr_blocks.#:     "" => "1"
    ingress.0.cidr_blocks.0:     "" => "0.0.0.0/0"
    ingress.0.from_port:         "" => "0"
    ingress.0.protocol:          "" => "tcp"
    ingress.0.security_groups.#: "" => "0"
    ingress.0.self:              "" => "0"
    ingress.0.to_port:           "" => "65535"
    name:                        "" => "allow_all"
    owner_id:                    "" => "<computed>"
    vpc_id:                      "" => "<computed>"

@sarahhodne (Contributor, Author)

A second round of terraform plan/terraform apply got everything back into a working state, by the way:

λ terraform plan
Refreshing Terraform state prior to plan...

atlas_artifact.web: Refreshing state... (ID: us-east-1:ami-32690e5a)
aws_security_group.allow_all: Refreshing state... (ID: sg-eea48684)
aws_instance.web.1: Refreshing state... (ID: i-e6e1fc07)
aws_instance.web.0: Refreshing state... (ID: i-58e1fcb9)
aws_elb.web: Refreshing state... (ID: atlas-example-elb)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

~ aws_elb.web
    instances.#: "0" => "2"
    instances.0: "" => "i-e6e1fc07"
    instances.1: "" => "i-58e1fcb9"


λ terraform apply
aws_security_group.allow_all: Refreshing state... (ID: sg-eea48684)
atlas_artifact.web: Refreshing state... (ID: us-east-1:ami-32690e5a)
aws_instance.web.1: Refreshing state... (ID: i-e6e1fc07)
aws_instance.web.0: Refreshing state... (ID: i-58e1fcb9)
aws_elb.web: Refreshing state... (ID: atlas-example-elb)
aws_elb.web: Modifying...
  instances.#: "0" => "2"
  instances.0: "" => "i-e6e1fc07"
  instances.1: "" => "i-58e1fcb9"
aws_elb.web: Modifications complete

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
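
As an aside, the Note in the plan output about the "-out" parameter points at a way to make sure apply executes exactly the diff that was planned: save the plan to a file and then apply that file. A minimal sketch (the file name is arbitrary):

λ terraform plan -out=web.tfplan
λ terraform apply web.tfplan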

@lukaszx0

I've got the same issue. I've been following the Atlas getting started guide and ran into problems when I bumped the instance count from 1 to 2, roughly as in the sketch below.
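
The only change from the guide's config is the count bump; a minimal sketch based on the config posted earlier in this thread (the actual file may differ slightly):

resource "aws_instance" "web" {
  instance_type = "t1.micro"
  ami = "${atlas_artifact.web.metadata_full.region-us-east-1}"
  security_groups = ["${aws_security_group.allow_all.name}"]

  # Bumping this from 1 to 2 is what triggers the error.
  count = 2
}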

@svanharmelen (Contributor)

@strzalek @henrikhodne any chance you could try this again with the latest code from master? PR #661 should fix this issue.

@lukaszx0

@svanharmelen I've just downloaded the source code and compiled a dev build, and it's not solved. You should be able to reproduce it easily using the tf config I've posted above.

Let me know if I can help in any way by providing more info.

@svanharmelen (Contributor)

@strzalek I just tried and can indeed reproduce it... Please see PR #676 for a possible fix for this...

@svanharmelen (Contributor)

It turned out to be a little more extensive than expected, but PRs #661, #676, #680 and #681 together fix this issue, so I'm closing this one now...

@lukaszx0

Awesome! 👍 Thanks for looking into this and fixing it. I'll play with master over the weekend.

@ghost commented May 4, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
