Specify remote config with -state flag #2824
Comments
+1 to this for the same reason: multiple environments managed by the same configs. We also considered writing a wrapper to do something like this. I saw this before, and it looked promising: https://github.com/mozilla/socorro-infra/blob/master/terraform/wrapper.sh
ooh, thanks for the heads up, @mlrobinson!
Here's what I whipped up: https://gist.github.com/nathanielks/5bd4de708e831bbc170f, albeit for S3 only atm.
@mlrobinson my script has been updated and improved since I posted it, might want to give it a looksee.
I've been dealing with this issue a bit differently. When I have a Terraform config that actually manages multiple different states, I check out the code for that root module and then use terraform remote config to attach that copy to the remote state for one particular instance.
Since the state is saved remotely, I then just delete the instance directory. If I need to make modifications, I just repeat with the same remote config to seed my copy with the correct remote state. I've been considering wrapping a script around this, but so far I've not got around to it. I expect I'll write one as I continue to create more and more distinct instances of this config (I'm using it to create development environments for a bunch of different developers, so this set will grow), but for now I just wanted to share it in case someone else is inspired to wrap a helper script around it.
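The throwaway-working-directory workflow described above can be sketched roughly as follows. This is a hypothetical illustration, not the commenter's actual script: the module source, bucket, and state key are placeholders, and the legacy pre-0.9 `terraform remote config` syntax is assumed.

```shell
#!/bin/sh
# Sketch of the disposable instance-directory workflow (placeholders throughout).
set -e

workdir=$(mktemp -d)   # one fresh directory per instance of the config
cd "$workdir"

# 1. Check out the root module code into the fresh directory (placeholder URL):
#    git clone https://example.com/my-infra.git .

# 2. Attach this copy to the remote state for one particular instance
#    (pre-0.9 remote-state command, with placeholder backend settings):
#    terraform remote config -backend=s3 \
#      -backend-config="bucket=my-tf-state" \
#      -backend-config="key=envs/dev-alice/terraform.tfstate"

# 3. Run terraform plan/apply as usual.

# 4. The state lives remotely, so the local directory is disposable:
cd /
rm -rf "$workdir"
```

Repeating steps 1-2 later re-seeds a new working copy with the correct remote state.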
@apparentlymart interesting strategy. You could even write it to …
Very interesting. Any downside to not caching the remote state locally and having to download it whenever you swap deployments? Other than having to save the S3 configs for each deployment separately to restore with, and the brief wait to download things, I'm not thinking there is one. You can delete it when done with a set of operations, or let it persist for some time as a history of what happened. I think I'm liking this approach.
@nathanielks I have indeed been creating them in /tmp. I expect if I wrote a wrapper script around it I'd make a random dir under /tmp.
@apparentlymart I'm smelling what you're stepping in
Using the idea proposed by @apparentlymart, I've been using the following script:

#!/bin/sh
readonly env="$1"
readonly selfPath=$(cd "$(dirname "$0")"; pwd -P)
readonly origPath=$(pwd)
readonly tmpPath="/tmp/$env-$(date +%s)"

[ "$env" != "staging" ] && [ "$env" != "production" ] && {
  echo "Unknown environment: \"$env\", must be one of \"production\" or \"staging\"."
  exit 1
}

mkdir -p "$tmpPath"
cd "$tmpPath"

terraform init -backend=atlas -backend-config="name=blendle/k8s-$env" "$selfPath/../terraform"
echo "atlas { name = \"blendle/k8s-$env\" }" > atlas.tf
terraform remote pull

if [ -n "$SHELL" ]; then eval "$SHELL"; else bash; fi

cd "$origPath"
rm -r "$tmpPath"

An example session:

$ bin/tf production
# Initialized blank state with remote state enabled!
# Local and remote state in sync
$ terraform plan
# Refreshing Terraform state prior to plan...
#
# google_container_cluster.blendle: Refreshing state... (ID: blendle)
#
# No changes. Infrastructure is up-to-date. This means that Terraform
# could not detect any differences between your configuration and
# the real physical resources that exist. As a result, Terraform
# doesn't need to do anything.
$ exit

It's not perfect, but it works. Still, native environment integration in Terraform would be nice!
+1 - There's no reason why Terraform couldn't support named remote config cache files. By forcing all remote state cache files to be .terraform/terraform.tfstate, we are effectively prohibited from managing multiple identical environments with the same code simply because we want to store that state remotely and share it with others. If we choose to keep that state local, it's a simple matter of specifying -state=file and -var-file=file, and suddenly we can manage any number of mostly identical but separate environments without a problem. Something as simple as adding an option like -remote-config-cache-file= should be all that is required to make remote state work in a similar manner to local state.
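As a concrete sketch of the local-state pattern this comment refers to: -state and -var-file are real Terraform flags, but the states/ and vars/ file layout and the tf_args helper are invented for illustration.

```shell
# Build the per-environment flags for a local-state layout like
#   states/<env>.tfstate and vars/<env>.tfvars   (layout is illustrative).
tf_args() {
  case "$1" in
    staging|production) ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
  echo "-state=states/$1.tfstate -var-file=vars/$1.tfvars"
}

# Usage sketch:  terraform plan $(tf_args staging)
```

With remote state, no equivalent per-environment naming of the cache file is possible, which is the gap the comment is pointing at.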
When the … When a … Managing multiple environments with …
@mmell - I agree 1000% :) If it's of any help, here's how I'm currently managing multiple environments in AWS, all with remote state. I wrote a Python wrapper around Terraform to manage symlinks to the "correct" remote state cache file. Disclaimer: this is not the latest version of my working code, but it should be good enough to give you the basic idea. https://github.com/pll/terraform_infrastructure/tree/master/modules
I independently started out with Terraform a few weeks ago and apparently arrived at the same result: I find myself having to set the state and var-file for the current environment in a wrapper script: …
For the remote config on S3 I have: …
It would seem that "single definition, multiple instances with independent states and variables" is a valid pattern that boils down to setting a few paths. It could be elegantly supported if Terraform were to read these kinds of settings from the environment directly. If so, people could do away with wrappers entirely. PS: I would consider what @mmell said - …
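A minimal sketch of that environment-driven idea. The TF_ENV variable, the plan_cmd helper, and the states/ and vars/ layout are all invented for illustration; only -state and -var-file are real Terraform flags.

```shell
# Compose a plan command from the environment instead of wrapper arguments.
# TF_ENV is a hypothetical variable for this sketch, defaulting to "staging".
plan_cmd() {
  env="${TF_ENV:-staging}"
  echo "terraform plan -state=states/$env.tfstate -var-file=vars/$env.tfvars"
}
```

A wrapper like this stays a one-liner per command; if Terraform read such settings from the environment natively, even that would be unnecessary.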
@soulrebel - Wouldn't that be nice? There are so many examples of low-hanging fruit that could eliminate the need for wrappers entirely. The two biggest ones being: …
+1
I just have this in my provider.tf: … As described here: https://www.terraform.io/docs/providers/aws/index.html, in the section called "Shared Credentials file".
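For reference, the shared-credentials approach from that docs page looks roughly like this in a provider block. This is an illustrative sketch, not the commenter's actual file; the region, path, and profile name are placeholders.

```hcl
provider "aws" {
  region                  = "us-east-1"                  # placeholder
  shared_credentials_file = "/home/me/.aws/credentials"  # placeholder path
  profile                 = "staging"                    # placeholder profile
}
```

Swapping the profile per environment handles credentials, though it does not solve the shared remote state cache problem this issue is about.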
+1 This is what I expected the -state flag to do. Strange that it doesn't.
I am having the same issue. I want to be able to run multiple TF plans/applies simultaneously, and without -state writing the state to a unique name, that's not possible without doing something nasty.
To work around this issue, we did something "nasty", as noted above: creating symlinks to state files. Something like this: …
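One way such a symlink workaround might look. The state-cache/ layout and the use_env helper are invented for illustration; the real scripts referenced in this thread differ in detail.

```shell
# Point .terraform/terraform.tfstate at a per-environment cached copy of
# the remote state via a symlink, so the same checkout can serve several
# environments. The state-cache/ layout is illustrative.
use_env() {
  mkdir -p .terraform state-cache
  : > "state-cache/$1.tfstate"   # ensure a cache file exists for the env
  ln -sf "../state-cache/$1.tfstate" .terraform/terraform.tfstate
}

# use_env staging     # terraform now reads/writes staging's cached state
# use_env production  # swap the link before operating on production
```

The fragility is obvious: forgetting to swap the link before a run operates on the wrong environment's state, which is why commenters keep asking for a native flag instead.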
@octalthorpe running …
Related to #1295, I'd like to be able to manage multiple environments from one configuration. I know I can specify the applicable var and state files locally, but I'm not sure it's possible to specify a specific remote config. I'd imagine the idea would be to pull the desired remote config down, then run any actions on that state file with the var files for the environment I'd like to use. I'm thinking of writing a wrapper around terraform that would let me specify an environment and pull the desired remote state before running any actions on it. Sound about right?