
Include sample "best practices Terragrunt configuration" #169

Closed
josh-padnick opened this issue Apr 21, 2017 · 26 comments
Labels: enhancement (New feature or request)

Comments

@josh-padnick
Contributor

Even I find myself glazing over a little by the bottom of the README because there are a lot of features here. Also, after reading everything, the Terragrunt user still has to make one more leap on their own: What is the ideal, complete setup with Terragrunt that combines all these use cases?

For that reason, it'd be nice if we had a _sample folder with live and modules subfolders that mimics a real setup and shows exactly what the root terraform.tfvars file and each "child" terraform.tfvars file look like.

Or alternatively, just include those two terraform.tfvars files (root and child) as examples.

@tomstockton

Hello,

I expect that this is probably low priority but I'd appreciate some rough guidance around how to organise / write the terraform.tfvars files, specifically on creating DRY remote state configuration. The current README.md suggests:

  • Creating parent and child terraform.tfvars files in the modules repo/folder, which define the remote state config.
  • Creating a terraform.tfvars file in the live repo/folder for each component in each environment, i.e. prod/app/terraform.tfvars, which defines the source (and any overriding vars) of the appropriate module.

I'm finding that this fails to configure the remote state: after the initial init downloads the source repo into a tmp dir, Terragrunt copies the working dir (of the live repo) over into the tmp dir and clobbers the terraform.tfvars file that includes the remote state config.

I've tried putting the remote state (parent and child) terraform.tfvars files into the live repo and this works as I would expect so I'm wondering if the current README is incorrect?

Any advice much appreciated!

Thanks

Tom

@brikis98
Member

brikis98 commented May 3, 2017

@tomstockton All the .tfvars files should be in the live repo. None of them should be in the modules repo.

@dpetzel

dpetzel commented May 3, 2017

I've also been struggling with this. Conceptually I think I grok all the individual examples, but when I try to combine some strategies I keep hitting issues.

Here is the combined structure I believe should be in place:

└── live
    ├── terraform.tfvars
    │
    ├── app1
    │   ├── dev
    │   │   ├── main.tf
    │   │   └── terraform.tfvars
    │   ├── qa
    │   │   ├── main.tf
    │   │   └── terraform.tfvars
    │   └── prod
    │       ├── main.tf
    │       └── terraform.tfvars
    └── app2
        ├── dev
        │   ├── main.tf
        │   └── terraform.tfvars
        ├── qa
        │   ├── main.tf
        │   └── terraform.tfvars
        └── prod
            ├── main.tf
            └── terraform.tfvars

└── modules
    └── app
        ├── main.tf
        ├── random_others.tf
        └── variables.tf

Each main.tf in live needs a minimal configuration to enable remote state:

terraform {
  backend "s3" {}
}

The child terraform.tfvars is also very minimal:

terragrunt = {
  include {
    path = "${find_in_parent_folders()}"
  }
  terraform {
    source = "git::git@github.com:foo/modules.git//app?ref=v0.0.3"
  }
}
var1 = "value1"
var2 = "value2"

The parent terraform.tfvars has the generic remote state configuration:

terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket  = "my-terraform-state-${get_aws_account_id()}"
      key     = "${path_relative_to_include()}/terraform.tfstate"
      region  = "${get_env("TF_VAR_region", "us-east-1")}"
      encrypt = true
    }
  }
}

With this setup, running a terraform plan copies the appropriate things out to the tmp location; however, the main.tf in the live folder overwrites the main.tf from the module, resulting in no resources being observed. If I rename either main.tf to something else, both files end up in the directory and the plan happens properly. This took a bit of trial and error to figure out, but having the combined use case examples mentioned above would help here.
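
For example (a sketch of the rename workaround just described; the filename backend.tf is arbitrary), the stub in each live folder could be:

# live/app1/dev/backend.tf (renamed from main.tf so Terragrunt's copy of the
# working dir doesn't clobber the module's main.tf in the tmp dir)
terraform {
  backend "s3" {}
}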

The next challenge I ran into with this was when trying to add some extra_arguments to my parent terraform.tfvars. Anything I set in the terraform key in the parent gets ignored. I believe this is already tracked here: #147.

As @josh-padnick said, I'm still in that "has to make one more leap" stage of trying to get the right structure in place that plays nicely with all the features I want to use. I'm confident I'll get there, but I just wanted to add some supporting feedback on this.

@brikis98
Member

brikis98 commented May 4, 2017

Apologies for the confusion everyone. We'll definitely add more examples to the docs and perhaps sample code. For now, the most I have time for is to give one quick example layout that will hopefully help everyone understand what I had in mind. There are, of course, many ways to use Terraform and Terragrunt, but the idea behind remote Terraform configurations was that you define your Terraform code (.tf files) exactly once, in a repo called modules:

modules
├── frontend-app
│   ├── main.tf
│   ├── outputs.tf
│   └── vars.tf
├── mysql
│   ├── main.tf
│   ├── outputs.tf
│   └── vars.tf
└── vpc
    ├── main.tf
    ├── outputs.tf
    └── vars.tf

This contains 100% normal Terraform code to deploy a variety of "components": a single app, or a single MySQL DB, or a single VPC. The only requirements for this code are that each component:

  1. Exposes anything that varies from environment to environment as an input variable. For example, the frontend-app may expose variables for the size/number of servers to run:

    variable "instance_count" {
      description = "How many servers to run"
    }
    
    variable "instance_type" {
      description = "What kind of servers to run (e.g. t2.large)"
    }
  2. Enables remote state. Note that the configuration for the remote state will be handled by Terragrunt, so you just need to add a "stub" to tell Terraform you want to use remote state in the first place:

    terraform {
      # The configuration for this backend will be filled in by Terragrunt
      backend "s3" {}
    }

To actually deploy the code in modules across a variety of environments, you have a second repo called live that contains only .tfvars files to configure Terragrunt:

live
├── prod
│   ├── frontend-app
│   │   └── terraform.tfvars
│   ├── mysql
│   │   └── terraform.tfvars
│   └── vpc
│       └── terraform.tfvars
├── qa
│   ├── frontend-app
│   │   └── terraform.tfvars
│   ├── mysql
│   │   └── terraform.tfvars
│   └── vpc
│       └── terraform.tfvars
├── stage
│   ├── frontend-app
│   │   └── terraform.tfvars
│   ├── mysql
│   │   └── terraform.tfvars
│   └── vpc
│       └── terraform.tfvars
└── terraform.tfvars

The root .tfvars file sets up the basic settings for remote state:

terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket     = "my-terraform-state"
      key        = "${path_relative_to_include()}/terraform.tfstate"
      region     = "us-east-1"
      encrypt    = true
      lock_table = "my-lock-table"
    }
  }
}

Each of the child .tfvars files includes the parent (to avoid copy/pasting the remote state settings), uses the source parameter to indicate which Terraform component to download, and fills in the environment-specific variables for that component:

terragrunt = {
  # Include all the settings from the root .tfvars file
  include {
    path = "${find_in_parent_folders()}"
  }

  # Apply the code in the frontend-app module
  terraform {
    source = "git::git@github.com:foo/modules.git//frontend-app?ref=v0.0.3"
  }
}

# Fill in the environment-specific variables for frontend-app
instance_count = 5
instance_type  = "t2.micro"

Hope that helps! Feel free to ask questions.

@dpetzel

dpetzel commented May 4, 2017

@brikis98 No need for apologies. It's totally understandable that as usage grows, new cases will arise that need clarification.

Thanks for the explanation above; I hadn't considered putting the s3 config stub into the modules. What you have laid out is helpful, but I've got a few more questions/scenarios which I think are fairly common. I'm not asking because I haven't addressed them, but mostly to get the ideas out in the open in case they help drive what the sample layout looks like.

  1. How do you see multiple-region configurations being laid out? There is the use case for having the stack you describe in two or more regions. In its simplest form, it sounds like it's simply a region variable in the module, but I think the rub comes when you want to ensure the remote state for a component is stored in the same region as the component, i.e. if I spin up frontend-app in us-west-2, the bucket holding its state should be in that region as well. I think this gets pretty easy by making the region name a parent of the env name.
  2. The use case where different environments are actually different accounts. I think this is also fairly easily solved by using the account ID helper in the bucket name. Combining this with item 1, I think the logical place for the root tfvars would be the region/environment directory (basically pushing it down one level from your example above). Once #20 (changes to modules are not pulled in automatically) is resolved, I think that placement works well (from my limited trials).
  3. What do you consider the pros/cons of having modules as a separate repo, vs. a single repo with two top-level dirs (live and modules)? I don't have any actual problems with the two repos, but given how tightly coupled the sourced module is to the configuration, it might feel odd to some to have to bounce out to another repo, e.g. if I add some new variable, working between two repos might seem disjointed. Keeping them in the same repo doesn't preclude anyone from still using the git:// source reference to pin to certain refs. Again, I don't have a strong opinion here, just some food for thought.
  4. Sometimes modules are generic enough that we can share them with other teams in our organization. By putting the S3 backend stub into the module, it feels like we might be taking away the re-usability of the module for other teams who might want to use some other backend (or local). What are your thoughts on adding a backend.tf alongside the tfvars in the live folders?

Thanks again for taking the time on this thread; please don't take these comments as complaints. I just wanted to share some of the thoughts I've had while getting my stuff organized over the last couple of days.

@brikis98
Member

brikis98 commented May 4, 2017

@dpetzel I'll answer your questions one at a time, as I wait for various builds to run :)

How do you see multiple-region configurations being laid out? There is the use case for having the stack you describe in two or more regions. In its simplest form, it sounds like it's simply a region variable in the module, but I think the rub comes when you want to ensure the remote state for a component is stored in the same region as the component.

The folder structure in the live repo is probably what you expect:

.
├── terraform.tfvars
├── eu-west-1
│   ├── prod
│   │   ├── frontend-app
│   │   │   └── terraform.tfvars
│   │   ├── mysql
│   │   │   └── terraform.tfvars
│   │   └── vpc
│   │       └── terraform.tfvars
│   ├── qa
│   │   ├── frontend-app
│   │   │   └── terraform.tfvars
│   │   ├── mysql
│   │   │   └── terraform.tfvars
│   │   └── vpc
│   │       └── terraform.tfvars
│   └── stage
│       ├── frontend-app
│       │   └── terraform.tfvars
│       ├── mysql
│       │   └── terraform.tfvars
│       └── vpc
│           └── terraform.tfvars
└── us-east-1
    ├── prod
    │   ├── frontend-app
    │   │   └── terraform.tfvars
    │   ├── mysql
    │   │   └── terraform.tfvars
    │   └── vpc
    │       └── terraform.tfvars
    ├── qa
    │   ├── frontend-app
    │   │   └── terraform.tfvars
    │   ├── mysql
    │   │   └── terraform.tfvars
    │   └── vpc
    │       └── terraform.tfvars
    └── stage
        ├── frontend-app
        │   └── terraform.tfvars
        ├── mysql
        │   └── terraform.tfvars
        └── vpc
            └── terraform.tfvars

In the modules repo, each module would take an additional input variable:

variable "region" {
  description = "The region to deploy to"
}

You can then use that throughout your Terraform code, including the provider configuration and your terraform_remote_state data sources:

provider "aws" {
  region = "${var.region}"
}

# ... create resources ...

data "terraform_remote_state" "vpc" {
  backend = "s3" 
  config {
    region = "${var.region}"
    bucket = "my-bucket"
    key = "vpc/terraform.tfstate"
  }
}

If your state is stored in a different region than the one you're deploying to, then you can have two input variables, one called region and one called terraform_state_region.
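
Roughly, a sketch of that two-variable approach (variable names as suggested above):

variable "region" {
  description = "The region to deploy resources into"
}

variable "terraform_state_region" {
  description = "The region where the Terraform state is stored"
}

provider "aws" {
  region = "${var.region}"
}

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config {
    region = "${var.terraform_state_region}"
    bucket = "my-bucket"
    key    = "vpc/terraform.tfstate"
  }
}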

@brikis98
Member

brikis98 commented May 4, 2017

The use case where different environments are actually different accounts. I think this is also fairly easily solved by using the account ID helper in the bucket name. Combining this with item 1, I think the logical place for the root tfvars would be the region/environment directory (basically pushing it down one level from your example above).

Yes, one root .tfvars file per account.

.
├── account-bar
│   ├── terraform.tfvars
│   ├── eu-west-1
│   │   ├── prod
│   │   │   ├── frontend-app
│   │   │   │   └── terraform.tfvars
│   │   │   ├── mysql
│   │   │   │   └── terraform.tfvars
│   │   │   └── vpc
│   │   │       └── terraform.tfvars
│   │   ├── qa
│   │   │   ├── frontend-app
│   │   │   │   └── terraform.tfvars
│   │   │   ├── mysql
│   │   │   │   └── terraform.tfvars
│   │   │   └── vpc
│   │   │       └── terraform.tfvars
│   │   └── stage
│   │       ├── frontend-app
│   │       │   └── terraform.tfvars
│   │       ├── mysql
│   │       │   └── terraform.tfvars
│   │       └── vpc
│   │           └── terraform.tfvars
│   └── us-east-1
│       ├── prod
│       │   ├── frontend-app
│       │   │   └── terraform.tfvars
│       │   ├── mysql
│       │   │   └── terraform.tfvars
│       │   └── vpc
│       │       └── terraform.tfvars
│       ├── qa
│       │   ├── frontend-app
│       │   │   └── terraform.tfvars
│       │   ├── mysql
│       │   │   └── terraform.tfvars
│       │   └── vpc
│       │       └── terraform.tfvars
│       └── stage
│           ├── frontend-app
│           │   └── terraform.tfvars
│           ├── mysql
│           │   └── terraform.tfvars
│           └── vpc
│               └── terraform.tfvars
└── account-foo
    ├── terraform.tfvars
    ├── eu-west-1
    │   ├── prod
    │   │   ├── frontend-app
    │   │   │   └── terraform.tfvars
    │   │   ├── mysql
    │   │   │   └── terraform.tfvars
    │   │   └── vpc
    │   │       └── terraform.tfvars
    │   ├── qa
    │   │   ├── frontend-app
    │   │   │   └── terraform.tfvars
    │   │   ├── mysql
    │   │   │   └── terraform.tfvars
    │   │   └── vpc
    │   │       └── terraform.tfvars
    │   └── stage
    │       ├── frontend-app
    │       │   └── terraform.tfvars
    │       ├── mysql
    │       │   └── terraform.tfvars
    │       └── vpc
    │           └── terraform.tfvars
    └── us-east-1
        ├── prod
        │   ├── frontend-app
        │   │   └── terraform.tfvars
        │   ├── mysql
        │   │   └── terraform.tfvars
        │   └── vpc
        │       └── terraform.tfvars
        ├── qa
        │   ├── frontend-app
        │   │   └── terraform.tfvars
        │   ├── mysql
        │   │   └── terraform.tfvars
        │   └── vpc
        │       └── terraform.tfvars
        └── stage
            ├── frontend-app
            │   └── terraform.tfvars
            ├── mysql
            │   └── terraform.tfvars
            └── vpc
                └── terraform.tfvars

@dpetzel

dpetzel commented May 4, 2017

Sorry, I don't think I clearly articulated the concern with regards to this one:

If your state is stored in a different region than the one you're deploying to, then you can have two input variables, one called region and one called terraform_state_region.

I totally understand that, when reading remote state, that is how it would be done; but what I'm suggesting is configuring your remote state to stay in the same region as the resources you're deploying. The example above hard-codes region = "us-east-1", so if I deploy this module twice (once to east and once to west), I think we end up with the state in only one region. It looks like you suggested moving the parent tfvars up, when I think it needs to move down. Alternatively, the region would need to be interpolated with something like get_env, but I'd like to avoid get_env as it's a bit error prone (someone might forget to set it to the right value) and could potentially mess up one region's remote state.
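
To make that concrete, here's a hypothetical sketch of "moving the parent tfvars down": one root terraform.tfvars per region directory, so each region's state stays in that region without relying on get_env:

# us-west-2/terraform.tfvars
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket  = "my-terraform-state-us-west-2"
      key     = "${path_relative_to_include()}/terraform.tfstate"
      region  = "us-west-2"
      encrypt = true
    }
  }
}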

@brikis98
Member

brikis98 commented May 4, 2017

What do you consider the pros/cons of having modules as a separate repo, vs. a single repo with two top-level dirs (live and modules)? I don't have any actual problems with the two repos, but given how tightly coupled the sourced module is to the configuration, it might feel odd to some to have to bounce out to another repo, e.g. if I add some new variable, working between two repos might seem disjointed. Keeping them in the same repo doesn't preclude anyone from still using the git:// source reference to pin to certain refs. Again, I don't have a strong opinion here, just some food for thought.

If everything is in one repo, then as soon as you update a module, that change will affect every single environment the next time you run apply. I prefer to be able to test my code in a staging environment before allowing the changes to affect prod, and the "always latest" approach with a single repo doesn't allow that.

By using multiple repos, you can create an immutable, versioned "artifact" for every change, and carefully promote that artifact through each of your environments (e.g. qa -> stage -> prod). It's easier to understand exactly what is deployed where and you never accidentally push code to prod before it's ready.
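
For instance (an illustrative sketch; the refs and repo URL are hypothetical), each environment's .tfvars pins its own version, so a new release can soak in stage before reaching prod:

# stage/frontend-app/terraform.tfvars (include block omitted for brevity)
terragrunt = {
  terraform {
    source = "git::git@github.com:foo/modules.git//frontend-app?ref=v0.0.4"
  }
}

# prod/frontend-app/terraform.tfvars -- still on the previous, proven release
terragrunt = {
  terraform {
    source = "git::git@github.com:foo/modules.git//frontend-app?ref=v0.0.3"
  }
}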

You could, of course, use tags to do versioning within a repo, but I found it a bit confusing to rely on old versions of the live repo while working in the live repo, especially with multiple levels of dependencies. And if you're using versioned URLs anyway, it's not clear what the advantage of keeping everything in one repo would be. For quick, iterative development, you'd have to use the --terragrunt-source option in either case.

@brikis98
Member

brikis98 commented May 4, 2017

Sometimes modules are generic enough that we can share them with other teams in our organization. By putting the S3 backend stub into the module, it feels like we might be taking away the re-usability of the module for other teams who might want to use some other backend (or local). What are your thoughts on adding a backend.tf alongside the tfvars in the live folders?

I think the issue here is one of terminology. I'm overloading the term module to mean two things:

  1. Completely generic infrastructure. For example, how to deploy a VPC or a Docker cluster. These are the types of modules described in Terraform's documentation that are meant to be used with the module keyword:

    module "vpc" {
      source = "/my-modules/vpc"
      # ...
    }

    These modules typically don't specify a provider, terraform_remote_state, or anything else that ties them to a particular deployment. Instead, all that data comes in through input variables. To share code with your team, this is the kind of module you want.

  2. A particular way to deploy your custom infrastructure. This is specific to Terragrunt and used by specifying a source parameter in your .tfvars file:

    terragrunt = {
      terraform {
        source = "..."
      }
    }

    We added support for these sorts of "modules" to Terragrunt to avoid having to copy and paste a lot of code when deploying multiple environments. Module probably isn't the right term; perhaps we should call them components? Terraform is starting to add native support for something similar with its concept of state environments. However, it seems like state environments are something you do with the terraform binary (e.g. by running terraform env set), and I'm not satisfied with that, as I prefer all my environments to be captured in code, not in commands.

@brikis98
Member

brikis98 commented May 4, 2017

I totally understand that, when reading remote state, that is how it would be done; but what I'm suggesting is configuring your remote state to stay in the same region as the resources you're deploying.

Ah, gotcha. This is a common question discussed in #132, #147 (comment), and elsewhere: how do you share variables between Terraform and Terragrunt? In your case, you want to be able to specify the region in one place, and for that region to be used in both Terragrunt's remote state configuration and your Terraform code.

The short answer is that we don't have a good solution yet. Perhaps #147 (comment) is the way to go. I'm open to ideas.

@dpetzel

dpetzel commented May 4, 2017

Module probably isn't the right term

I've struggled with this as well. I've been using "components" myself internally, but even that I don't love. It's almost more of an implementation, but as they say, "naming things is hard" :)

Regarding #147 (which I bumped into myself): having some more sophisticated "deep merging" would certainly help. It might also be interesting if nested includes were possible, so given our earlier multiple-region setup we might have something like this:

# region1/env1/terraform.tfvars
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      key        = "env1/${path_relative_to_include()}/terraform.tfstate"
    }
  }
}
# region1/terraform.tfvars
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket     = "my-terraform-state-region1"
      region     = "region1"
      encrypt    = true
      lock_table = "my-lock-table"
    }
  }
}

It seems like state environments are something you do with the terraform binary (e.g. by running terraform env set), and I'm not satisfied with that, as I prefer all my environments to be captured in code, not in commands.

I've also been underwhelmed here, especially in the context of multiple regions, and keeping state regionalized.

@josh-padnick
Contributor Author

I'm glad we're codifying some of our official recommendations, even if just as a GitHub issue for now.

I did want to register one potential exception to the following recommendation:

To actually deploy the code in modules across a variety of environments, you have a second repo called live that contains only .tfvars files to configure Terragrunt.

On my vacation, I was playing around with the aws_dynamodb_table resource for use in a serverless app and found that this Terraform resource has so many inline resources that defining a module doesn't make sense unless hashicorp/terraform#14037 is resolved.

One option is to define a Terraform module for each possible permutation of aws_dynamodb_table, like one for an aws_dynamodb_table with no Global Secondary Index, one for two Local Secondary Indexes, etc. But this feels clumsy, so I'd prefer to just avoid using Terragrunt's source features for this use case.

Comments welcome on this use case.

@brikis98
Member

brikis98 commented May 5, 2017

On my vacation, I was playing around with the aws_dynamodb_table resource for use in a serverless app and found that this Terraform resource has so many inline resources that defining a module doesn't make sense unless hashicorp/terraform#14037 is resolved.

Yes, a "module" would be very difficult to use with this resource.

However, a "component"--that is, a remote Terraform configuration downloaded by Terragrunt--would work just fine. You don't need to define every permutation of a component; just the very small number (usually 1) of permutations your team actually uses. So I stand by the argument that just about all Terraform code (.tf) should live in the modules repo and only .tfvars in live.

@dpetzel

dpetzel commented May 5, 2017

I'd tend to agree with @brikis98 here. A "module" in this context isn't something that takes all potential options, but rather an implementation. You know which options on that resource you want to deploy, and you'd code this "implementation module" as such. It would then have a few variables that vary between your environments: e.g. you may want different read/write capacity between dev and prod, but you probably wouldn't have an index block in one environment and not another.

However, for a different project/app you might not have a certain index block, in which case that becomes a second "implementation module", again exposing only the things you'd vary between environments.

The key concept (which took me a bit to get a handle on) is that you're not building generic, multi-use modules, but rather modules that implement a very specific configuration, unique to your scenario.

@josh-padnick
Contributor Author

The key concept (which took me a bit to get a handle on) is that you're not building generic, multi-use modules, but rather modules that implement a very specific configuration, unique to your scenario.

Increasingly, I'm seeing this as an essential concept. Thanks for sharing your perspective! I took @brikis98's advice and it was indeed quite easy to work with.

@conorgil
Contributor

I just caught up on this thread and wanted to throw in my 2 cents. I would also love to have some example code to get started with Terragrunt. I ran into some of the issues linked in this thread and having example code would have made it much easier to wrap my head around some of the Terragrunt features.

I'll be out of town this weekend, but I plan to start on a PR next week that contains tons of examples. I will rely heavily on this thread and also the examples in the README. I will try to have example code for every single Terragrunt keyword/command in the README.

@josh-padnick
Contributor Author

@conorgil That would be wonderful. Thank you!

@dfcarpenter

I have a question about remote state. After setting up my simple environment and reading around about setting up a VPC per env, I came across an article which suggests maintaining separate state per env. I noticed, per the docs and as reaffirmed above (if using a "live" repo with dev, staging, and prod, and a separate modules repo with the .tf files), that we have a single state bucket for all the envs. If we want separate buckets and a DynamoDB lock table per env, would I just add a terraform.tfvars not at the root of the live folder but in each environment folder?

@josh-padnick
Contributor Author

I noticed that ... we have a single state bucket for all the envs. If we want separate buckets and a DynamoDB lock table per env, would I just add a terraform.tfvars not at the root of the live folder but in each environment folder?

First, I think you'll find that a single S3 Bucket for all Terraform state in a given AWS account works just fine. We've used that pattern for multiple clients, and since Terragrunt automatically fills in the paths to each module's remote state, it works great. Is there a reason this approach doesn't work as well for you as the "multiple tfvars files" approach?

If you still want to customize your remote state per environment, then yes, you can create a "root" terraform.tfvars in each environment folder, so that calls to find_in_parent_folders() resolve to that per-environment file.

One word of caution: there is only one level of "include" that happens when you use the following in your terraform.tfvars:

terragrunt = {
  include {
    path = "${find_in_parent_folders()}"
  }
}

The terraform.tfvars file that's found by find_in_parent_folders() cannot itself include from anything, so there won't be any concept of "global" settings. You'd have to redundantly copy any of those between your various terraform.tfvars files.
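
The resulting layout would look something like this (illustrative; note that each environment's root terraform.tfvars must be self-contained):

live
├── prod
│   ├── terraform.tfvars      # prod's own bucket/lock table settings
│   └── frontend-app
│       └── terraform.tfvars  # finds the prod root above
└── stage
    ├── terraform.tfvars      # stage's settings, copied rather than included
    └── frontend-app
        └── terraform.tfvars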

@mootpt

mootpt commented May 24, 2017

Just in case anyone hits the same snag:

Terraform was prompting me for my bucket name, key, and region when using:

terraform {
  # The configuration for this backend will be filled in by Terragrunt
  backend "s3" {}
}

along with a prefilled terraform.tfvars file that included my terragrunt remote config.

The culprit was: https://www.terraform.io/docs/commands/init.html#backend-true

This value needed to be set to false for Terraform to initialize properly using the backend config found in my tfvars file.
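
That is, if you're running init by hand rather than letting Terragrunt drive it:

terraform init -backend=false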

@woods

woods commented May 28, 2017

@brikis98 @josh-padnick With the patterns described above, where does the "provider" section go? Ordinarily, I'd have a stanza like this in my main.tf:

provider "aws" {
  profile = "myapp"  # Use ~/.aws/credentials
  region  = "us-east-1"
}

It feels like that would need to be present in each of the modules subdirectories, probably in main.tf each time. But that seems to go against Terragrunt's push for DRY.

I do have a global module for things like the network and basic DNS, but if I only list the aws provider there, then it seems like it wouldn't be present when I run the other modules individually from the live repo.

@brikis98
Member

It feels like that would need to be present in each of the modules subdirectories, probably in main.tf each time. But that seems to go against Terragrunt's push for DRY.

Yup, that's correct. If you want to avoid repeating the settings, expose them as variables that you provide from some common.tfvars file.
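
A sketch of that approach (the variable and file names here are illustrative):

# modules/frontend-app/main.tf
variable "aws_profile" {}
variable "aws_region" {}

provider "aws" {
  profile = "${var.aws_profile}"
  region  = "${var.aws_region}"
}

# live/common.tfvars, passed in via e.g. -var-file=../../common.tfvars
aws_profile = "myapp"
aws_region  = "us-east-1"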

@conorgil
Contributor

@josh-padnick @brikis98 @dpetzel @tomstockton @woods @dfcarpenter

I opened #234 for discussion on a WIP example that I put together to highlight some of the basics for a Terragrunt "best practices" type of thing. I'd appreciate any and all comments on it to see if I am on the right track to creating something useful for the community.

@brikis98
Member

brikis98 commented Jul 2, 2017

OK, I just put together a couple repos we can use for large, somewhat real-world examples:

https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example
https://github.com/gruntwork-io/terragrunt-infrastructure-live-example

Hopefully, this provides some useful guidance for how to use Terragrunt. Feedback and PRs are very welcome!

@hjia1

hjia1 commented May 11, 2021

Hi @brikis98, I tried your sample code, but it seems to fail to retrieve region.hcl and env.hcl from the root terragrunt.hcl. Or did I miss something?

Sample error message:

    Error in function call; Call to function "read_terragrunt_config" failed: /home/hjia/dev/tg-payments/live/terragrunt.hcl:5,40-63: Error in function call; Call to function "find_in_parent_folders" failed: ParentFileNotFound: Could not find a region.hcl in any of the parent folders of /home/hjia/dev/tg-payments/live/terragrunt.hcl. Cause: Traversed all the way to the root...

Thanks
