
Template file outside the module doesn't get saved to the state file #3732

Closed
ernoaapa opened this issue Nov 3, 2015 · 13 comments

@ernoaapa

ernoaapa commented Nov 3, 2015

Hi,
I have the following module:

variable "ami" {}

variable "subnet_id" {}

resource "template_file" "server-userdata" {
  count = "2"
  filename = "${path.module}/../../global/files/user-data.tftemplate"
  vars {
    vault_pass = "1234"
    hostname = "temporary-${count.index + 1}"
    environment_domain = "temporary"
  }
}

resource "aws_instance" "server" {
  count = "2"

  ami = "${var.ami}"
  subnet_id = "${var.subnet_id}"
  instance_type = "t2.small"

  user_data = "${element(template_file.server-userdata.*.rendered, count.index)}"
}

When I use that module, every run recreates the servers because it creates template_file.server-userdata again; somehow the template file doesn't get saved to the state file.
Note also that the user_data ends up the same for both servers.
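For reference, the module call looks roughly like this. This is a minimal sketch: the module name and the ami/subnet values come from the plan output below, but the source path is a placeholder, not the real layout:

module "temporary-cluster" {
  # placeholder module path, shown only for illustration
  source    = "../modules/temporary-cluster"

  ami       = "ami-91d709e2"
  subnet_id = "subnet-fa13858d"
}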

-/+ module.temporary-cluster.aws_instance.server.0
    ami:                      "ami-91d709e2" => "ami-91d709e2"
    availability_zone:        "eu-west-1c" => "<computed>"
    ebs_block_device.#:       "0" => "<computed>"
    ephemeral_block_device.#: "0" => "<computed>"
    instance_type:            "t2.small" => "t2.small"
    key_name:                 "" => "<computed>"
    placement_group:          "" => "<computed>"
    private_dns:              "ip-10-0-0-188.eu-west-1.compute.internal" => "<computed>"
    private_ip:               "10.0.0.188" => "<computed>"
    public_dns:               "" => "<computed>"
    public_ip:                "" => "<computed>"
    root_block_device.#:      "1" => "<computed>"
    security_groups.#:        "0" => "<computed>"
    source_dest_check:        "true" => "1"
    subnet_id:                "subnet-fa13858d" => "subnet-fa13858d"
    tenancy:                  "default" => "<computed>"
    user_data:                "d505e8d09e0cafa8cd4654e96dfb7ceeb6147e01" => "e6d4858090abe3aad078951b2a0afce70fbc7566" (forces new resource)
    vpc_security_group_ids.#: "1" => "<computed>"

-/+ module.temporary-cluster.aws_instance.server.1
    ami:                      "ami-91d709e2" => "ami-91d709e2"
    availability_zone:        "eu-west-1c" => "<computed>"
    ebs_block_device.#:       "0" => "<computed>"
    ephemeral_block_device.#: "0" => "<computed>"
    instance_type:            "t2.small" => "t2.small"
    key_name:                 "" => "<computed>"
    placement_group:          "" => "<computed>"
    private_dns:              "ip-10-0-0-240.eu-west-1.compute.internal" => "<computed>"
    private_ip:               "10.0.0.240" => "<computed>"
    public_dns:               "" => "<computed>"
    public_ip:                "" => "<computed>"
    root_block_device.#:      "1" => "<computed>"
    security_groups.#:        "0" => "<computed>"
    source_dest_check:        "true" => "1"
    subnet_id:                "subnet-fa13858d" => "subnet-fa13858d"
    tenancy:                  "default" => "<computed>"
    user_data:                "4a7f10c28ef7a338a7987a75e58e795de2e367c4" => "e6d4858090abe3aad078951b2a0afce70fbc7566" (forces new resource)
    vpc_security_group_ids.#: "1" => "<computed>"

+ module.temporary-cluster.template_file.server-userdata.0
    filename:                "" => ".terraform/modules/a0c7604f79a79e3529913e666d5a3ff3/user-data.tftemplate"
    rendered:                "" => "<computed>"
    vars.#:                  "" => "3"
    vars.environment_domain: "" => "temporary"
    vars.hostname:           "" => "temporary-1"
    vars.vault_pass:         "" => "1234"

+ module.temporary-cluster.template_file.server-userdata.1
    filename:                "" => ".terraform/modules/a0c7604f79a79e3529913e666d5a3ff3/user-data.tftemplate"
    rendered:                "" => "<computed>"
    vars.#:                  "" => "3"
    vars.environment_domain: "" => "temporary"
    vars.hostname:           "" => "temporary-2"
    vars.vault_pass:         "" => "1234"

Workaround

Surprisingly, when I copy the template file into the module folder and change the path to:

filename = "${path.module}/user-data.tftemplate"

then suddenly everything starts working!
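For clarity, the full workaround resource looks roughly like this (the same resource as in the original module above, with only the filename path changed to point at the copy inside the module directory):

resource "template_file" "server-userdata" {
  count    = "2"
  # the template now lives inside the module directory itself
  filename = "${path.module}/user-data.tftemplate"
  vars {
    vault_pass         = "1234"
    hostname           = "temporary-${count.index + 1}"
    environment_domain = "temporary"
  }
}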
After this fix, on the first run I got:

* aws_instance.server.0: diffs didn't match during apply. This is a bug with Terraform and should be reported.
* aws_instance.server.1: diffs didn't match during apply. This is a bug with Terraform and should be reported.

But running it again then creates the machines.

@ptierno

ptierno commented Dec 2, 2015

👍 on this. Good thing I didn't run an apply.

@nrcxcia

nrcxcia commented Dec 2, 2015

I ran into this issue as well... +1

@erkolson

I'm seeing a different issue in a similar context. Using Terraform v0.6.10.

I also have a counted template_file resource whose rendered output is used as the user_data attribute for a matching count of aws_instance resources. Each aws_instance resource has the user_data attribute defined like this:

  user_data = "${element(template_file.redis_userdata.*.rendered, count.index)}"

Once I perform an apply, further applies do not destroy/recreate the instances. However, when I update the count variable to add more templates/instances, all of the instances get destroyed and recreated. Somewhat curiously, during a terraform plan, all of the instances show the same hash for the user_data attribute. For example, if I increase the count from 4 to 5, the existing 4 servers will show something like this:

-/+ aws_instance.redis_master.0
   ami:     "ami-f2f6c498" => "ami-f2f6c498"
   associate_public_ip_address:      "false" => "0"
...
   user_data:      "c762b0542be5c0f6b10d2a0e364a95a3bbab56c9" => "1aadd7d6b9b5533164d3cd335b56b9f4c512d1b0" (forces new resource)

and the new resource will look something like this:

-/+ aws_instance.redis_master.4
   ami:     "ami-f2f6c498"
   associate_public_ip_address:      "false" => "0"
...
   user_data:     "1aadd7d6b9b5533164d3cd335b56b9f4c512d1b0"

It seems odd that all of the templates, which have different content, have the same hash value.

@phinze
Contributor

phinze commented Feb 29, 2016

@erkolson Check out my update over on a related issue for details on the changing-counts problem. The canonical thread for that one is #3449.

This "template not saved to state" thing is very weird - will add it to my shortlist to look into soon! 👌

@erkolson

Thanks @phinze

@elblivion
Contributor

@phinze I'm getting a potentially related bug. In my case I see template_file resources saved to state, but when a colleague checks the config + state out from git at the same commit, terraform plan (v0.6.12) wants to recreate all the template_file resources and modify all related resources. There is no difference between the parameters of the existing resources and the new ones plan wants to create, other than the template attribute; these differ because Terraform seems to keep the absolute path to the template file, which is different on our two machines. Is this how it's supposed to be?

Thanks!

@phinze
Contributor

phinze commented Mar 9, 2016

Hi @elblivion - are you using the filename attribute in your configs? One of the reasons we deprecated filename in favor of the template attribute, with content lifted from a file via file(), is exactly this kind of cross-machine path issue.

Looking into the original reported issue, it does seem that it all stems from usage of the deprecated filename attribute - hopefully just switching over to template = "${file(...)}" solves the issue for everybody!
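
Applied to the config from the original report, the switch would look roughly like this (same template path; only the attribute and the file() wrapper change):

# deprecated: stores a path, which differs between machines and module copies
filename = "${path.module}/../../global/files/user-data.tftemplate"

# recommended: lift the file contents into the template attribute
template = "${file("${path.module}/../../global/files/user-data.tftemplate")}"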

@elblivion
Contributor

Hi @phinze, we switched to template a few versions back:

Config:

template = "${path.module}/user_data.tpl"

State:

"template": "/Users/stanton/Code/cf-infra-stacks/aws-production-806120774687/us-east-1_staging/terraform/.terraform/modules/1837784e5a50bf4216423e336b1ee5a7/contentful_userdata/user_data.tpl",

We use modules for almost everything. We've worked around this for now through some nifty use of Docker to trick Terraform, but there ought to be a better solution.

@phinze
Contributor

phinze commented Mar 10, 2016

Ah, it looks like you switched to template but kept using a path. It seems this was an oversight in the backwards-compatibility code, one that we'll remedy in the next major release. The template attribute is meant to hold only the contents of your template, not the path.

So the full usage would look like

template = "${file("${path.module}/user_data.tpl")}"

This will lift the contents of the template into the state and neutralize all cross-machine path issues.

Let me know if all this makes sense! I'll file a separate issue describing the BC oversight that needs to be fixed.

@elblivion
Contributor

@phinze oops! Fixed that now, many thanks! It required recreating a bunch of things, but luckily we're using git for fetching modules and we had pinned the indispensable resources to a particular module version anyway.

phinze added a commit that referenced this issue Mar 10, 2016
Turns out the BC code allowed users to move from `filename` to
`template` to squash the warning without having to switch from template
paths to template contents.

Here we warn when `template` is specified as a path so we can remove the
functionality in the future and remove this source of confusion.

refs #3732
@kblcuk

kblcuk commented Jul 1, 2016

@phinze not sure if I should create a separate issue; maybe I'll ask here first.

I tried to update our current configuration to use template instead of filename, but when I run a plan, Terraform wants to recreate the resource because the template_file id changes (I think).

From the look of the template's rendered output, it seems to be the same as it was with filename before, so I would expect Terraform not to touch anything, just update the state file.
The change I made is from filename = "../global/files/user-data.tftemplate" to template = "${file("../global/files/user-data.tftemplate")}".

Thanks in advance!

@mitchellh
Contributor

Hey there. It has been a long time since there has been activity on this issue (our fault, sorry!), and the last known repro was against a very old version of Terraform. It is possible this issue still exists but we've addressed dozens of issues with the same type of error message in the past year. Could you please try again on the latest version and let us know if it doesn't work? The easiest way would be to simply open a new issue. Thanks!

@kblcuk I'm not sure I understand your question, but if it's still an issue please open a new one!

@ghost

ghost commented Apr 20, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 20, 2020