
Templates not working with latest master branch #2495

Closed
mikeyhill opened this issue Jun 25, 2015 · 14 comments · Fixed by #2527

Comments

@mikeyhill

Hey, I just updated TF to pull in some changes and noticed that all of our templates stopped working. If I move back to a previous release, all is well. It's been quite a while (a week?) since I last updated TF, so I don't have any info about which commit this stopped working at, but for now I have reverted.

  * Error configuring template: connection is shut down
  * Error configuring template: connection is shut down
  * Error configuring aws: connection is shut down

Occasionally I'll get multiple lines like the ones above. Is this an issue with TF, or have the templates changed? I don't see any differences in the documentation.

version: Terraform v0.6.0-dev (c1bb9116ad1899209bc87eae6122b478aee4d839)

@phinze
Contributor

phinze commented Jun 25, 2015

Thanks for the report. It could be related to #2406 - will take a look.

@mitchellh
Contributor

Can you show us your configuration? Or a debug log? That'll help debug this.

@mikeyhill
Author

Yes, I was just getting that - the debug log is so large it's crashing my browser on Gist, so I'm looking for another way to post it.
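
One way to capture the full debug log to a file instead of the terminal, assuming the TF_LOG environment variable is supported by this build (the file name here is just an example):

    $ TF_LOG=DEBUG terraform plan 2> tf-debug.log

The verbose trace goes to stderr, so redirecting stderr keeps the normal plan output readable while the log ends up in tf-debug.log.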

@mikeyhill
Author

    resource "aws_elb" "admin-elb" {

      name            = "${var.env}-admin-elb"
      subnets         = ["${var.admin_subnet_1_id}","${var.admin_subnet_2_id}"]
      security_groups = ["${aws_security_group.admin-elb-security.id}"]
      internal        = false

      listener {
        instance_port     = 80
        instance_protocol = "http"
        lb_port           = 80
        lb_protocol       = "http"
      }

      listener {
        instance_port     = 443
        instance_protocol = "http"
        lb_port           = 443
        lb_protocol       = "http"
      }

      health_check {
        healthy_threshold   = 2
        unhealthy_threshold = 5
        timeout             = 3
        target              = "HTTP:80/index.php"
        interval            = 20
      }

      cross_zone_load_balancing   = true
      idle_timeout                = 400
      connection_draining         = true
      connection_draining_timeout = 400

    }

    resource "aws_lb_cookie_stickiness_policy" "admin-sticky-cookie" {

          name          = "${var.env}-admin-sticky-cookie"
          load_balancer = "${aws_elb.admin-elb.id}"
          lb_port       = 80
          cookie_expiration_period = 600

    }

    resource "template_file" "admin-cloud-init" {
        filename = "cloud-config/elb-init.yml"
        vars {
            recipe          = "server-admin::runtime"
            chef_dns        = "${var.chef_dns}"
            chef_validator  = "${var.chef_validator}"
            chef_validator_key = "${file(\"validators/${var.env}-validator.pem\")}"
            datacenter      = "${var.dns_region}"
            env             = "${var.env}"
            elb             = "${aws_elb.admin-elb.name}"
            proxy_elb       = ""
        }
    }

    resource "aws_launch_configuration" "admin-launch-config" {

        name                    = "${var.env}-admin-launch-config"
        image_id                = "${var.ami}"
        instance_type           = "${var.admin_instance_type}"
        user_data               = "${template_file.admin-cloud-init.rendered}"
        iam_instance_profile    = "${aws_iam_instance_profile.admin-server-profile.name}"
        security_groups         = ["${aws_security_group.admin-security.id}"]
        key_name                = "${var.provision_key}"

        associate_public_ip_address = false

        root_block_device = {
          volume_type = "gp2"
          volume_size = "100"
          delete_on_termination = true
        }
        ephemeral_block_device = {
              device_name = "/dev/sdb"
              virtual_name = "ephemeral0"
        }
        ephemeral_block_device = {
              device_name = "/dev/sdc"
              virtual_name = "ephemeral1"
        }
    }

    resource "aws_autoscaling_group" "admin-autoscaling-group" {

      availability_zones        = ["${var.az_1}","${var.az_2}"]
      vpc_zone_identifier       = ["${var.admin_subnet_1_id}","${var.admin_subnet_2_id}"]
      name                      = "${var.env}-admin-autoscaling-group"
      max_size                  = "${var.admin_max_size}"
      min_size                  = "${var.admin_min_size}"
      health_check_grace_period = 300
      load_balancers          = ["${aws_elb.admin-elb.name}"]
      health_check_type         = "EC2"
      desired_capacity          = "${var.admin_desired_capacity}"
      force_delete              = true
      launch_configuration      = "${aws_launch_configuration.admin-launch-config.name}"

      tag {
        key                 = "Name"
        value               = "${var.env}-admin-node"
        propagate_at_launch = true
      }
      tag {
        key                 = "Environment"
        value               = "${var.env}"
        propagate_at_launch = true
      }
      tag {
        key                 = "Role"
        value               = "admin"
        propagate_at_launch = true
      }
      tag {
        key                 = "Group"
        value               = "application"
        propagate_at_launch = true
      }
    }
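
For narrowing down whether the template itself renders, one option is to exercise the template_file resource in isolation with a throwaway config; the resource name render-test, the literal values, and the trimmed vars list below are purely illustrative (the template needs every variable it references to be supplied):

    resource "template_file" "render-test" {
        filename = "cloud-config/elb-init.yml"
        vars {
            # supply the same vars the real resource passes
            recipe = "server-admin::runtime"
            env    = "staging"
        }
    }

    output "rendered" {
        value = "${template_file.render-test.rendered}"
    }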

@mikeyhill
Author

Following up on this: it looks like the issue first appeared in 3815122, but I'm not sure whether this is intended or not.

@lamdor
Contributor

lamdor commented Jun 26, 2015

I also ran into this problem when I upgraded to terraform master this morning. I was able to do a git bisect on terraform against our configuration and found that 0b1dbf3 was the commit that broke it. We have 5 different providers in our configuration and it doesn't seem to be selective or predictable in which provider it has connection problems with. Sometimes it's multiple.

I've reverted to the commit before 0b1dbf3, and can verify that things work great as usual.
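
For anyone wanting to repeat that bisect, a rough sketch of the workflow, assuming a source checkout of Terraform; the good-revision placeholder and the make dev build step are assumptions, so substitute whatever matches your setup:

    $ cd $GOPATH/src/github.com/hashicorp/terraform
    $ git bisect start
    $ git bisect bad master                      # master shows the error
    $ git bisect good <last-known-good-commit>   # e.g. the release you reverted to
    # git now checks out a candidate commit; rebuild and test it:
    $ make dev                                   # or however you build terraform
    $ terraform plan --target=module.test        # run against the failing config
    $ git bisect good                            # or `git bisect bad`, per the result
    # repeat build/test/mark until git reports the first bad commit, then:
    $ git bisect reset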

@phinze
Contributor

phinze commented Jun 26, 2015

I'm working on reproing this now. The config from @mikeyhill has a bunch of dependencies I'm trying to meet - @rubbish if you happen to have a simpler repro case let me know.

@lamdor
Contributor

lamdor commented Jun 26, 2015

@phinze So it's happening when I'm doing a targeted plan against a certain module. I'll gist up an example. One sec.

@lamdor
Contributor

lamdor commented Jun 26, 2015

@phinze Here's an example

In main.tf:

    provider "aws" {
      region = "us-east-1"
    }

    module "test" {
      source = "./module"
    }

    module "test2" {
      source = "./module"
    }

and in module/main.tf:

    resource "aws_route53_zone" "test" {
      name = "test"
    }

When I run terraform plan --target=module.test, I sporadically get:

    $ terraform plan --target=module.test
    Error configuring: 1 error(s) occurred:

    * Error configuring aws: connection is shut down

But sometimes it works.

Sometimes it comes back with:

    $ terraform plan --target=module.test
    There are warnings and/or errors related to your configuration. Please
    fix these before continuing.

    Errors:

      * module.test2.provider.aws: connection is shut down
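
Since the failure is intermittent, a quick way to get a rough failure rate is to loop the targeted plan (a shell sketch; the iteration count is arbitrary):

    $ for i in $(seq 1 20); do terraform plan --target=module.test > /dev/null 2>&1 || echo "run $i failed"; done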

@lamdor
Contributor

lamdor commented Jun 26, 2015

Also, when doing a full terraform plan, I've not been able to get the connection errors.

@phinze
Contributor

phinze commented Jun 26, 2015

@rubbish perfect thank you - on it now

phinze added a commit that referenced this issue Jun 26, 2015
When targeting prunes out all the resource nodes between a provider and
its close node, there was no dependency to ensure the close happened
after the configure. Needed to add an explicit dependency from the close
to the provider.

fixes #2495
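
To picture the fix described in that commit message: in the dependency graph, the provider's "close" node normally sits behind the resource nodes that use the provider, so closing naturally happens after configuring. When -target prunes those resources away, nothing orders the two, and the close can race ahead of the configure (hence "connection is shut down"). A simplified illustration in DOT, with node names modeled on terraform graph output but purely illustrative; an edge A -> B here reads as "A depends on B":

    digraph {
        // full graph: close depends on the resource, resource depends on the provider
        "provider.aws (close)" -> "aws_route53_zone.test"
        "aws_route53_zone.test" -> "provider.aws"

        // after -target prunes the resource, the fix keeps an explicit edge
        // from the close node to the provider node
        "provider.aws (close)" -> "provider.aws"
    }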
@mikeyhill
Author

@phinze - thanks! Apologies for the not-so-useful config sample above; I'll keep in mind that you need a working example rather than the actual configs. I'm still a bit green with this.

phinze added a commit that referenced this issue Jun 29, 2015
When targeting prunes out all the resource nodes between a provider and
its close node, there was no dependency to ensure the close happened
after the configure. Needed to add an explicit dependency from the close
to the provider.

This tweak highlighted the fact that CloseProviderTransformer needed to
happen after DisableProviderTransformer, since
DisableProviderTransformer inspects up-edges to decide what to disable,
and CloseProviderTransformer adds an up-edge.

fixes #2495
phinze added a commit that referenced this issue Jun 29, 2015
Not sure if this test has value /cc @mitchellh (who requested one be
added) to see what I might be missing here.

refs #2495
@ghost

ghost commented May 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators May 1, 2020