Multiple resources for resource "null_resource" #4329

Is it possible to run a "remote-exec" on multiple resources:
- 1 master
- X slaves

After the master and all slaves are up, run some command on all nodes.

Comments

Hi @salehsedighi-ck! You could probably hack

Hi @jen20, thanks for looking into the issue. We need to provision Role1 (one server) and Role2 (many servers); once all the Role2 instances are ready, we run a set of scripts on Role1, passing it the Role2 attributes.
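One way to express that ordering directly is sketched below in the same era's Terraform syntax. The aws_instance.role1 / aws_instance.role2 resources and the configure-role1 script are hypothetical stand-ins for the real config; the point is that referencing every Role2 IP in triggers makes Terraform wait for all Role2 instances before provisioning Role1:

```hcl
# Hypothetical stand-ins for the asker's Role1 (single server) and
# Role2 (many servers). AMI and instance type are placeholders.
resource "aws_instance" "role1" {
  ami           = "ami-00000000"
  instance_type = "t2.micro"
}

resource "aws_instance" "role2" {
  count         = 3
  ami           = "ami-00000000"
  instance_type = "t2.micro"
}

# Referencing every Role2 IP below creates an implicit dependency on all
# Role2 instances, so this only runs once they are all up; if the set of
# Role2 IPs ever changes, the changed triggers map re-runs the provisioner.
resource "null_resource" "role1-configure" {
  triggers {
    role2_ips = "${join(",", aws_instance.role2.*.private_ip)}"
  }

  connection {
    host = "${aws_instance.role1.public_ip}"
    # ...
  }

  provisioner "remote-exec" {
    # configure-role1 is a hypothetical script that takes the Role2
    # addresses as a comma-separated argument.
    inline = [
      "/usr/local/bin/configure-role1 ${join(",", aws_instance.role2.*.private_ip)}",
    ]
  }
}
```

The triggers map doubles as a change detector: scaling Role2 changes the joined IP list, which re-runs the Role1 scripts.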
@salehsedighi-ck I've achieved this in the past by modelling it with provisioners that can recognize whether they are on a master or a slave and bail early. So, for example, if you need master bootstrap to happen first, then slave bootstrap, then a full-cluster post-bootstrap, you could do something like this:

```hcl
# This runs right after all the cluster instances are up and running
resource "null_resource" "master-bootstrap" {
  count = "${length(aws_instance.cluster.*.id)}"

  connection {
    host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
    # ...
  }

  provisioner "remote-exec" {
    inline = <<SCRIPT
#!/bin/bash
if ! /usr/local/bin/is-master; then
  echo "Not master"
  exit 0
fi
echo "Bootstrapping master"
/usr/local/bin/master-bootstrap
SCRIPT
  }
}

# This runs after master bootstrap finishes
resource "null_resource" "slave-bootstrap" {
  depends_on = ["null_resource.master-bootstrap"]
  count      = "${length(aws_instance.cluster.*.id)}"

  connection {
    host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
    # ...
  }

  provisioner "remote-exec" {
    inline = <<SCRIPT
#!/bin/bash
if /usr/local/bin/is-master; then
  echo "Not slave"
  exit 0
fi
echo "Bootstrapping slave"
/usr/local/bin/slave-bootstrap
SCRIPT
  }
}

# This runs after slave bootstrap on all nodes
resource "null_resource" "cluster-post-bootstrap" {
  depends_on = ["null_resource.slave-bootstrap"]
  count      = "${length(aws_instance.cluster.*.id)}"

  connection {
    host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
    # ...
  }

  provisioner "remote-exec" {
    inline = ["/usr/local/bin/cluster-post-bootstrap"]
  }
}
```

As for passing data back and forth between the provisioning scripts, you'd need to use something external to Terraform, like Consul K/V: have the nodes write and read their metadata in the proper place as part of their bootstrapping scripts.
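For instance, a minimal sketch of that metadata exchange inside a node's bootstrap script, assuming a local Consul agent on its default port; the key layout (cluster/nodes/.../ip, cluster/master/ip) is illustrative, not part of the original recommendation:

```bash
#!/bin/bash
# Hypothetical bootstrap snippet: publish this node's metadata to Consul K/V
# over the HTTP API, then read back the value the master published.
set -e

NODE_NAME="$(hostname)"
NODE_IP="$(hostname -I | awk '{print $1}')"   # first local IP (Linux)

# Write this node's IP under a well-known prefix.
curl -s -X PUT -d "$NODE_IP" \
  "http://localhost:8500/v1/kv/cluster/nodes/$NODE_NAME/ip"

# Read the master's IP (written by the master's own bootstrap script);
# ?raw returns the bare value instead of the JSON envelope.
MASTER_IP="$(curl -s "http://localhost:8500/v1/kv/cluster/master/ip?raw")"
echo "Master is at $MASTER_IP"
```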
I believe the above recommendation should work (at least it has worked for us!), so I'm going to close this issue for now. Feel free to follow up if you'd like to discuss further, though. 😀
Worked, thanks!