
Multiple resources for resource "null_resource" #4329

Closed
saleh-sedighi-ck opened this issue Dec 15, 2015 · 7 comments

@saleh-sedighi-ck

Is it possible to run a "remote-exec" provisioner on multiple resources:
- 1 master
- X slaves

After the master and all slaves are up, run some command on all nodes.

@saleh-sedighi-ck
Author

resource "null_resource" "slave-bootstrap" {
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.role-ABC.*.id)}"
  }

  connection {
    host = "???????????????????????"
    type = "ssh"
    user = "${var.ami_user}"
    key_file = "${var.initial_ssh_private_key}"
    agent = true
  }

  provisioner "remote-exec" {
    inline = [
      "MY COMMAND"
    ]
  }
}
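One way to fill in the placeholder host (a sketch, not from the thread): give the null_resource a count matching the slave instances and select each host with element(). Only the resource name aws_instance.role-ABC and the variables are taken from the snippet above; the rest is an assumption.

```hcl
# Sketch only: runs the command once per slave instance, addressing
# each one via count/element. Assumes aws_instance.role-ABC exposes
# a public_ip attribute.
resource "null_resource" "slave-bootstrap" {
  count = "${length(aws_instance.role-ABC.*.id)}"

  triggers {
    cluster_instance_ids = "${join(",", aws_instance.role-ABC.*.id)}"
  }

  connection {
    host     = "${element(aws_instance.role-ABC.*.public_ip, count.index)}"
    type     = "ssh"
    user     = "${var.ami_user}"
    key_file = "${var.initial_ssh_private_key}"
    agent    = true
  }

  provisioner "remote-exec" {
    inline = ["MY COMMAND"]
  }
}
```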

@jen20 jen20 added the question label Dec 15, 2015
@jen20
Contributor

jen20 commented Dec 15, 2015

Hi @saleh-sedighi-ck! You could probably hack depends_on around enough to do this - it might be that this isn't the optimal way, though. Can you explain a bit more about what you're trying to achieve?

@saleh-sedighi-ck
Author

Hi @jen20, thanks for looking into the issue.
Let's assume the following scenario:

We need to provision Role1 (1 server) and Role2 (many servers).

When ALL Role2 servers are ready, run a bunch of scripts on Role1 (passing Role2 attributes).
When Role1 is ready, run a bunch of scripts on Role2 with Role1 attributes as inputs.

@phinze
Contributor

phinze commented Dec 16, 2015

@saleh-sedighi-ck I've achieved this in the past by modelling it with provisioners that can recognize whether they are on a master or a slave and bail early.

So, for example, if you need master bootstrap to happen first, then slave bootstrap, then a full cluster post-bootstrap, you could do something like this:

# This runs right after all the cluster instances are up and running
resource "null_resource" "master-bootstrap" {
  count = "${length(aws_instance.cluster.*.id)}"
  connection {
    host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
    # ...
  }
  provisioner "remote-exec" {
    # Note: inline expects a list of strings, so the heredoc goes
    # inside a list.
    inline = [<<SCRIPT
#!/bin/bash
if ! /usr/local/bin/is-master; then
  echo "Not master"
  exit 0
fi
echo "Bootstrapping master"
/usr/local/bin/master-bootstrap
SCRIPT
    ]
  }
}

# This runs after master bootstrap finishes
resource "null_resource" "slave-bootstrap" {
  depends_on = ["null_resource.master-bootstrap"]
  count = "${length(aws_instance.cluster.*.id)}"
  connection {
    host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
    # ...
  }
  provisioner "remote-exec" {
    inline = [<<SCRIPT
#!/bin/bash
if /usr/local/bin/is-master; then
  echo "Not slave"
  exit 0
fi
echo "Bootstrapping slave"
/usr/local/bin/slave-bootstrap
SCRIPT
    ]
  }
}

# This runs after slave bootstrap on all nodes
resource "null_resource" "cluster-post-bootstrap" {
  depends_on = ["null_resource.slave-bootstrap"]
  count = "${length(aws_instance.cluster.*.id)}"
  connection {
    host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
    # ...
  }
  provisioner "remote-exec" {
    inline = ["/usr/local/bin/cluster-post-bootstrap"]
  }
}

As for passing data back and forth between the provisioning scripts - you'd need to use something external to Terraform like Consul K/V - have nodes write/read their metadata in the proper place as part of their bootstrapping scripts.
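One Terraform-side way to seed that metadata (an illustrative sketch, not from the thread) is the consul_keys resource: publish the master's address to Consul K/V at apply time, and let each node's bootstrap script read it back. The provider address, key path, and resource name below are assumptions.

```hcl
# Illustrative sketch: publish the master's private IP to Consul K/V
# so bootstrap scripts on the other nodes can look it up. Assumes a
# reachable Consul agent and that aws_instance.cluster.0 is the master.
provider "consul" {
  address = "consul.example.com:8500"
}

resource "consul_keys" "master-metadata" {
  key {
    name  = "master_ip"
    path  = "cluster/master/private_ip"
    value = "${aws_instance.cluster.0.private_ip}"
  }
}
```

On the nodes themselves, the bootstrap scripts could fetch the value through the Consul HTTP K/V API (e.g. a GET against /v1/kv/cluster/master/private_ip?raw), which is the "write/read their metadata in the proper place" part mentioned above.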

@phinze
Contributor

phinze commented Dec 21, 2015

I believe the above recommendation should work (at least it has worked for us!), so I'm going to close this issue for now. Feel free to follow up if you'd like to discuss further though. 😀

@phinze phinze closed this as completed Dec 21, 2015
@saleh-sedighi-ck
Author

worked, thanks

@ghost

ghost commented Apr 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 29, 2020