
Terraforming multiple AWS Regions #870

Closed
ctro opened this issue Jan 27, 2015 · 7 comments

@ctro

ctro commented Jan 27, 2015

I'm opening this issue to get some feedback

provider "aws" requires region=.
My topology is identical across regions.
I would like to only express the config for 1 region in my terraform scripts.

I would like to change "-var region=us-east-1" to "-var region=us-west-1" to target different regions, one at a time.
This seems better than including every region's config (80-some instances), split out and duplicated by region.

With simple TF scripts, changing the region variable confuses Terraform because its terraform.tfstate file only knows the resources from the "other" (i.e. the first terraformed) region.

I can't figure out how to do this correctly.

So, is there some recommended way to do this that I missed?
Thanks for any insight.

@pmoust
Contributor

pmoust commented Jan 27, 2015

In your scenario, if you don't have shared resources depending on one another across regions, the simplest way would be to just specify a state file path per region.
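A minimal sketch of that approach, assuming the 0.x-era -state flag on terraform apply (the helper name and state-file naming are illustrative):

```shell
# Sketch: one state file per region, same region-agnostic config.
# apply_region is a hypothetical helper; -state and -var are 0.x-era flags.
apply_region() {
  region="$1"
  terraform apply \
    -var "region=${region}" \
    -state="terraform-${region}.tfstate"
}
# e.g. apply_region us-east-1; apply_region us-west-1
```

Each region then reads and writes only its own state file, so switching regions no longer confuses Terraform.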

@ctro
Author

ctro commented Jan 27, 2015

Yes! This is the simple switch that I couldn't find. Thank you @pmoust.
Specifically:
-state=path allows me to save one state file per region.

https://www.terraform.io/docs/commands/apply.html

@ctro ctro closed this as completed Jan 27, 2015
@farridav

I realise this issue is closed, but it's the only forum I've found to discuss this use case. Can one use multiple state files with remote state? I'm also considering turning my code into a module and executing it once per region within one Terraform script. That keeps all my architecture as code, and means other collaborators don't need to remember my command-line arguments in order to change a given region's architecture. Does this make sense to anyone else? Also, if there's a better place to discuss this, let me know. Thanks.

@endofcake
Contributor

@farridav, it is possible, although in a somewhat hacky way. What we ended up doing is this: we have a wrapper script which accepts account_name and some other parameters. We store TF state files under different key prefixes in S3 (it goes something like /config/test/sandbox1, /config/prod/sandbox42, etc.). The wrapper script makes sure that whenever we do a deploy, we first nuke the cached local state (this is really important!), and then initialise remote state with the proper key prefix and pull it from the S3 bucket. After the deploy goes through, the state file is synchronised back to the S3 bucket.

It works reasonably well, although I feel that at this point it becomes a poor man's substitute for CloudFormation.
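A sketch of that state handling, assuming the pre-0.9 "terraform remote config" CLI; the bucket name, key layout, and deploy helper are all illustrative:

```shell
# Sketch of the per-deploy state handling described above (pre-0.9 CLI).
# "deploy", the bucket name, and the key layout are illustrative.
deploy() {
  account="$1"; stack="$2"
  rm -f .terraform/terraform.tfstate        # nuke the cached local state first
  terraform remote config -backend=s3 \
    -backend-config="bucket=my-tfstate-bucket" \
    -backend-config="key=config/${account}/${stack}"
  terraform apply                           # state syncs back to S3 afterwards
}
# e.g. deploy test sandbox1
```

Removing the cached local state before re-pointing remote config is the crucial step; otherwise a stale state from the previous account or stack gets pushed to the wrong key.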

@farridav

farridav commented Apr 8, 2016

I ended up doing exactly the same thing, wrapped terraform, and used multiple statefiles, it goes something like this:

  • terraform_wrapper plan
    • mkdir && cd into new tmpdir
    • Sets up remote state using s3://statebucket/.tfstate
    • Fetch remote state
    • Copy files directory to tmpdir
    • Template out terraform files using Jinja2
    • Execute terraform command
    • Cleanup tmpdir
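The steps above might be sketched as a shell function like this (the paths, the state key, the templating placeholder, and the pre-0.9 remote-state CLI are all assumptions):

```shell
# Hypothetical sketch of the wrapper steps above; paths and the state key
# are illustrative, and the remote-state CLI is the pre-0.9 one.
terraform_wrapper() {
  region="$1"; shift
  tmpdir=$(mktemp -d)                       # mkdir a new tmpdir
  cp -R files/. "$tmpdir"                   # copy the files directory in
  (
    cd "$tmpdir"
    # (Jinja2 would template *.tf.j2 -> *.tf here)
    terraform remote config -backend=s3 \
      -backend-config="bucket=statebucket" \
      -backend-config="key=${region}.tfstate"   # set up + fetch remote state
    terraform "$@"                          # execute the terraform command
  )
  rm -rf "$tmpdir"                          # cleanup tmpdir
}
# e.g. terraform_wrapper eu-west-1 plan
```

Running inside a throwaway tmpdir means no cached local state can leak between regions.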

I had battled for a little while trying to bend Terraform's use of modules and interpolation in region blocks, but it became a bit too messy for my liking. In the end I filled the holes with dynamic remote state and Jinja2 templating.

It would be awesome if this kind of thing were possible in native Terraform, though it may be beyond what the tool was designed for.

@endofcake
Contributor

@farridav, what do you use Jinja for? Is it for dynamically substituting some values in the templates?
We use multiple tfvars files for region-specific config (VPC IDs, subnets, etc.), and a separate config.tf file for shared config (account IDs or instance sizes).
So, for example, we put

provider "aws" {
  region = "${var.deploy-region}"
}

into awsProvider.tf.
We also put

variable "availability-zones" {
  type = "map"
  default = {}
}

into config.tf. It's a variable with an empty default. To make plan and apply work, we pass -var-file=$regionConfig to the command, pointing it at us-west-2.tfvars, for example. Inside this file, we have a mapping:

deploy-region = "us-west-2"

availability-zones.test = "us-west-2a us-west-2b us-west-2c"
availability-zones.uat  = "us-west-2a us-west-2b us-west-2c"
availability-zones.prod = "us-west-2a us-west-2b us-west-2c"

When we need to do a deploy of the same unit of infrastructure to a different region, we just use another tfvars file.
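Usage might look like this (deploy_unit is a hypothetical helper; the tfvars filenames are illustrative):

```shell
# Select the region purely by the tfvars file passed on the command line.
# deploy_unit is a hypothetical helper; filenames are illustrative.
deploy_unit() {
  terraform plan -var-file="$1" && \
  terraform apply -var-file="$1"
}
# deploy_unit us-west-2.tfvars
# deploy_unit eu-west-1.tfvars   # same unit of infrastructure, another region
```

Because all region-specific values live in the tfvars file, no .tf file needs to change between regions.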

@ghost

ghost commented Apr 26, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 26, 2020