
config: provisioner splat vars can only reference other resources #1016

Merged — 3 commits merged into master from b-splat-validate on Feb 23, 2015

Conversation

mitchellh
Contributor

Fixes #868
Fixes #795

This adds validation so that connection info and provisioners that use splats (${resource.name.*.property}) can only reference other resources, not themselves.
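For illustration, here is a sketch (with invented resource and AMI names) of the pattern the new validation rejects — a resource whose own connection block reaches back into itself through a splat variable:

```hcl
# Hypothetical example; resource and attribute values are invented.
resource "aws_instance" "web" {
  ami           = "ami-abc123"
  instance_type = "m3.medium"
  count         = 3

  connection {
    # Rejected by the new validation: a splat self-reference
    host = "${element(aws_instance.web.*.private_ip, count.index)}"
  }
}
```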

@knuckolls
Contributor

I think this breaks the ability for an aws_instance in a VPC to provision itself via the private_ip attribute, specifically when you use the count parameter on the resource. I've been temporarily moved into another role internally, but I'll either make the time to test whether this breaks that use case or get someone else to test it.

@mitchellh
Contributor Author

@knuckolls I don't think so. We still allow specific self-references, such as ${aws_instance.foo.0.private_ip}, just not splat references: ${aws_instance.foo.*.private_ip}
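Spelled out as a minimal sketch (names invented), the indexed form remains valid while the splat form does not:

```hcl
resource "aws_instance" "foo" {
  # ...

  connection {
    # Still allowed: a specific-index self-reference
    host = "${aws_instance.foo.0.private_ip}"

    # No longer allowed: a splat self-reference
    # host = "${aws_instance.foo.*.private_ip}"
  }
}
```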

@knuckolls
Contributor

I still think that breaks existing functionality unless there's a workaround I'm unaware of. Here's a pared-down example from our Terraform codebase.

resource "aws_instance" "zookeeper_aws_instance" {
  ami = "${var.ami}"
  instance_type = "${var.zookeeper_aws_instance_type}"
  security_groups = ["${var.aws_security_group}"]
  subnet_id = "${var.aws_subnet_id}"
  key_name = "${var.aws_key_name}"

  count = "5"  // i just picked a magic number here. 

  connection {
    type = "ssh"
    host = "${element(aws_instance.zookeeper_aws_instance.*.private_ip, count.index)}"
    user = "ubuntu"
    key_file = "${var.key_file}"
    timeout = "30s"
  }

  provisioner "remote-exec" {
    inline = [
      // code that needs to run on each zookeeper node
    ]
  }
}

In this case, if "${element(aws_instance.zookeeper_aws_instance.*.private_ip, count.index)}" is unavailable, then how do I tell the provisioner the unique IP per iteration of the resource? This problem doesn't exist without the count parameter.

Is there another way to do this that I'm unaware of? Using count.index on a splat seems to be the only way to grab the correct IP for provisioning.

@mitchellh
Contributor Author

@knuckolls Ah, I see. Yes, I mentioned in another issue that this never should have worked in the first place. We instead want to introduce a self variable lookup: #219.

@knuckolls
Contributor

Yep, something like this would be much preferable.

 host = "${self.private_ip}"

There are probably quite a few people using this trick if they use a VPC and are required to provision over the private_ip. Will the existing functionality be deprecated for a release, or will this just be merged with release notes saying to upgrade to the new syntax? Either way works for me, since I'm aware of the problem and commenting on this issue, but for those who may be caught unaware we'll need a nice upgrade path.
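Applied to the zookeeper example above, the proposed lookup from #219 might read like this (hypothetical sketch; the feature had not landed at the time of this discussion):

```hcl
# Hypothetical: each instance resolves its own attribute via self,
# with no need to index into a splat of the whole resource.
connection {
  type = "ssh"
  host = "${self.private_ip}"
  user = "ubuntu"
}
```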

@mitchellh
Contributor Author

@knuckolls We're planning on getting that in for 0.4, so, going to merge this. :)

mitchellh added a commit that referenced this pull request Feb 23, 2015
config: provisioner splat vars can only reference other resources
@mitchellh mitchellh merged commit 87ecf4f into master Feb 23, 2015
@mitchellh mitchellh deleted the b-splat-validate branch February 23, 2015 21:48
@knuckolls
Contributor

@mitchellh Thanks for the heads up.

@MerlinDMC
Contributor

This one breaks my "nice and easy" pushing of initial Cassandra seeds, which I did via a splat + join :-/

resource "google_compute_instance" "cassandra" {
  count = "${var.cassandra_count}"

  # ...

  provisioner "remote-exec" {
    inline = [
      "echo ${join(",",google_compute_instance.cassandra.*.network_interface.0.address)} > /tmp/cassandra-addrs",
    ]
  }
}

Is there any workaround for this, or do I have to split that configuration out into a second run via fabric/salt/ansible/bash+ssh?

@mitchellh
Contributor Author

@MerlinDMC Sorry, but the unfortunate truth is that this probably never worked correctly anyway. The way it "worked" before was by conveniently ignoring or missing one or more instances in that splat output. So the issue was that it appeared to work, until it... well, didn't.

We're hoping to address your specific use case in 0.6

@knuckolls
Contributor

Just to +1 that it probably didn't work as consistently as you expected it to. I uncovered a case wherein using a splat just like that within the provisioner would return inconsistent results, due to the parallelism of the creation of the google_compute_instance.cassandra nodes (in your example).

One thing you could do is to move your provisioner to a null_resource, so it runs on each node at a later step in the execution plan.

Here's a slimmed down example from our codebase.

resource "null_resource" "grafana_provisioner" {
  count = "${var.grafana_instance_count}"

  connection {
    type = "ssh"
    host = "${element(aws_instance.grafana.*.private_ip, count.index)}"
    timeout = "30s"
  }

  provisioner "remote-exec" {
    inline = [
      "<your code here>",
    ]
  }
}

The only trick is that if you roll instances, you'll need to taint this resource. Furthermore, you may need to think it through and make your remote-exec idempotent if necessary. I personally think babushka is a good tool for that job, but bash or whatever else will work just as well.
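One hedged alternative to manual tainting, assuming a Terraform version where null_resource supports the triggers argument, is to key the null_resource on the instance IDs so it is recreated whenever an instance is rolled:

```hcl
resource "null_resource" "grafana_provisioner" {
  count = "${var.grafana_instance_count}"

  # Hypothetical: recreate this resource (and re-run its provisioners)
  # whenever the corresponding instance is replaced, so no manual
  # taint is needed after rolling instances.
  triggers {
    instance_id = "${element(aws_instance.grafana.*.id, count.index)}"
  }

  # connection and provisioner blocks as in the example above
}
```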

@josharian
Contributor

We're hoping to address your specific use case in 0.6

@mitchellh we have a similar use case. Can you share the approach that you had in mind? I might be able to help with implementation.

@ghost

ghost commented May 3, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators May 3, 2020