Make it possible to use modules from Terraform Registry #311
We use |
When using git as a source, it is set to:
When using a local directory as well as the Terraform Registry:
Terraform understands that if a directory is not present, it looks in the Terraform Registry. Similar logic has to be implemented in Terragrunt when creating |
How do you use Terragrunt with Terraform Registry modules without rewriting them to git URLs? |
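For reference, the three source forms under discussion look like this in a module block (a sketch: the registry example assumes the community terraform-aws-modules/vpc/aws module, and the git URL and paths are illustrative, not taken from this thread):

```hcl
# 1. Git source: the form go-getter (and therefore Terragrunt) handles.
module "vpc_from_git" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=master"
}

# 2. Local directory source.
module "vpc_from_local" {
  source = "../modules/vpc"
}

# 3. Terraform Registry source (namespace/name/provider). When the string
#    is not a local path or a URL, Terraform resolves it via the registry.
module "vpc_from_registry" {
  source = "terraform-aws-modules/vpc/aws"
}
```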
We use HashiCorp's go-getter library to format URLs. Maybe the newer version supports registry URLs?
No way to do so now AFAIK |
AFAIK Terraform 0.11 will have better support for the registry, and for now, I will use git URLs. Thank you! |
How can you use git URLs to the registry modules when those modules don't include the
parts that are necessary to use a module within Terragrunt? I tried defining a module in my modules repo, but that doesn't seem to work. I have prod/vpc/main.tf which looks like this:
But when I try to do any terragrunt commands, it blows up with errors. So it seems like there is no way to use the module repository with terragrunt? Doesn't that pretty much defeat the purpose of terragrunt and its emphasis on DRY when I have to completely duplicate the code in a repository module in my own module in order to use terragrunt? |
@sgendler-stem I have just published the minimalistic code I use for this: https://gist.github.com/antonbabenko/2ca1225589c7c6d42f476f97d779d4ff |
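For those who don't follow the link: the idea behind such a wrapper (sketched minimally here with assumed variable names and outputs; this is not the gist's exact contents) is to keep a thin main.tf in your own modules repo whose only job is to call the registry module, and point Terragrunt's source at that wrapper:

```hcl
# Hypothetical wrapper: modules-repo/vpc/main.tf

terraform {
  backend "s3" {}
}

provider "aws" {
  region = "${var.region}"
}

variable "region" {}
variable "name" {}
variable "cidr" {}

# Thin wrapper around the registry module. When Terragrunt runs
# terraform init in its tmp folder, the registry module is downloaded
# into .terraform/, so none of its code has to be copied locally.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "${var.name}"
  cidr = "${var.cidr}"
}

# Re-export only the outputs you need, prefixed with module.vpc.
output "vpc_id" {
  value = "${module.vpc.vpc_id}"
}
```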
What errors? |
The error @sgendler-stem is referring to was probably related to a file named |
What do you mean? Terragrunt doesn't "provide" any |
It is not a problem of Terragrunt, but of how files are named in main.tf (prod/vpc/main.tf) and in the one provided by the Terraform VPC module. Point 2 |
Sorry, still not entirely following. Are you saying that @sgendler-stem has a main.tf file in the local folder that includes terraform-aws-modules/vpc/aws as a module *and* a terraform.tfvars file that sets the source param *also* to terraform-aws-modules/vpc/aws?
If so, then yes, one of the main.tf files would override the other. The solution, of course, is to either (a) put the local main.tf file into a separate folder/repo and point the source param in terraform.tfvars to that folder/repo, or (b) not use Terragrunt at all and just directly run init, plan, and apply on the local main.tf. |
I'm not certain I understand the fix yet, as I haven't played with it yet, but Anton is correct that I have a prod/vpc directory in my 'modules' repo (we need a better vocabulary to differentiate between actual terraform modules and a subdirectory in the live repo for terragrunt) which contains a main.tf that then calls out to a module called vpc which also includes a main.tf. If I'm understanding the comment stream here, I guess terragrunt/terraform is pulling everything into a single directory structure somewhere and one vpc/main.tf file is overwriting the other rather than co-existing with it. That's an easy enough fix to make on my end, but definitely worth documenting somewhere in the terragrunt docs, as I can only imagine that naming a directory in the modules repo after the module that it uses to do work is a reasonably common pattern, and the error messaging around it isn't very clear. It'd be fantastic if terragrunt could detect the name conflict and rename one of the files, but even just providing an explicit error message would short-circuit a lot of the trial-and-error experimentation I ended up doing to try to fix it.
In the meantime, I just reached out about getting access to the IAC library via gruntwork's website. We may go with the reference architecture, too, since it doesn't have to save me much time to be cost-effective, but I figure the library code will help me determine if I need it. We were wanting to migrate to kubernetes from ECS and we do use elasticsearch, neither of which are currently supported by your architecture, so we'll have some development pain or costs, regardless. The actual operating environments for our app are standard enough to be easy to implement. It's things like the devops VPC with VPN access and connections to all other environments, and a fully functional CI/CD pipeline, which are more complex and time-consuming to automate. Getting some best-practices guidance on things like secrets management would also be really helpful.
|
That depends on how you have Terragrunt configured. What does your terraform.tfvars file contain? What is your directory structure?
The IAC library has code to set up VPCs, OpenVPN, and Jenkins, which will save you a bunch of time, but it's the Ref Arch that ties all those pieces together into an end-to-end solution, which is also a big time saver. Whether the Ref Arch make sense for you depends on your needs, so feel free to email me if you want more info! |
In my live repo, I have
live-repo/
terraform.tfvars:
terragrunt = {
  terraform {
    extra_arguments "retry_lock" {
      commands  = ["${get_terraform_commands_that_need_locking()}"]
      arguments = ["-lock-timeout=20m"]
    }
  }

  remote_state {
    backend = "s3"
    config {
      bucket         = "stem-terraform-state-us-west-2"
      key            = "${path_relative_to_include()}/terraform.tfstate"
      region         = "us-west-2"
      encrypt        = true
      dynamodb_table = "terraform-lock-table"
    }
  }
}
prod/
vpc/
terraform.tfvars:
terragrunt = {
  terraform {
    source = "git::ssh://git@github.com/stems/stem-infra.git//vpc?ref=refs/heads/master"
  }

  include {
    path = "${find_in_parent_folders()}"
  }
}
name = "prod"
region = "us-west-2"
cidr = "10.0.0.0/16"
azs = ["us-west-2a", "us-west-2b", "us-west-2c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
database_subnets = ["10.0.21.0/24", "10.0.22.0/24", "10.0.23.0/24"]
elasticache_subnets = ["10.0.31.0/24", "10.0.32.0/24", "10.0.33.0/24"]
create_database_subnet_group = true
enable_nat_gateway = true
enable_s3_endpoint = true
enable_dynamodb_endpoint = false
enable_dns_hostnames = true
enable_dns_support = true
Then in my modules repo, I have the following:
modules-repo/
vpc/
main.tf:
terraform {
  backend "s3" {}
}

provider "aws" {
  region = "${var.region}"
}
And then I had something like
module "vpc" {
  source = "path/to/vpc/module"
  var1   = "${var.var1}"
  var2   = "${var.var2}"
  # ... and many more variables, all copied straight through
}
With a variables.tf file copied straight from the module in question, and an outputs.tf file which was nearly identical to the outputs.tf from the module, but with the output value prepended with "module.vpc."
It was a lot of typing to set up and seemed like the antithesis of DRY: I have to modify the local variables.tf and outputs.tf to match every change to the remote version in the module registry, and have to copy variables through manually. But then it didn't work anyway, due to the file name conflict.
It still isn't clear that I wouldn't have all of that copying to do. Anton's example didn't include it, but it wasn't clear that it was correct, either, since the terraform.tfvars file pointed directly to the module source, not to my local module repo, where the renamed main.tf file would be located. I haven't had a chance to experiment with it yet.
|
Based on your usage, I don't see how there could be a file name conflict. Terragrunt will download the module from your vpc-modules repo into a tmp folder. It will then call terraform init on it, which should result in the VPC module being downloaded from the Terraform registry into a .terraform folder within the tmp folder. Could you paste the log output from running Terragrunt? |
I will if I run into it again. I just got access to the gruntworks library, so I'm going to dig into that. If I have more problems, I'll let you know, but until I've tried some of these suggestions, I don't want to waste anyone's time, since I don't have code that looks like that right now, so I have no way to validate what I'm saying. I eventually just copied the module source from the registry to my modules repo and then just configured it via tfvars, which got me over the hump in my experiments last night. But I have to be able to make it work with my newly licensed library modules, so I'll either figure it out from the docs or make a support request.
|
OK, keep us posted! You may find these two example repos helpful with setting up Terragrunt: https://github.com/gruntwork-io/terragrunt-infrastructure-live-example |
I still cannot get ANYTHING terragrunt to function correctly. I seem to be bouncing off as many as a half-dozen separate bugs, so my different attempts to work around problems always just bang into another bug. I've had emails, phone conversations, purchased the gruntworks library, and read every word of available documentation, but I'll be damned if I can get terragrunt to work in even the most basic context. Here's the VERY long email I just sent covering every single variation and the resulting error messages. This email actually encapsulates maybe 4 or 5 separate terraform and terragrunt bugs, but it is possible that one bug is actually causing a domino effect on all the others. I cannot separate them, since I cannot get even the most basic functionality to work when it comes to terragrunt.
When I specify a -terragrunt-source with the double-slash after the directory that represents the top of my modules repo, I get a warning that terraform was initialized in an empty directory, and the path it uses in the logs is then missing the directory name after the double-slash. If I leave the double-slash off, it actually works correctly, but it also emits a warning about the missing double-slash in the terragrunt-source value (ignore the error at the end and look at the paths in the text output):
If I leave off the double-slash, I get the following output (the same holds true if I use absolute or relative path and if I include a trailing slash or not):
Meanwhile, if I commit my code in the module repo, push it to origin, and then remove the terragrunt-source directive from the command entirely, I get a completely different error: In the module repo:
Now in the live repo:
It would seem that that error is because Terragrunt is totally failing to see that the file in the module repo has changed. I know it is using the correct branch for two reasons: first, I can see the ref in the URL; second, the vpc_mgmt directory ONLY exists in the develop branch. It hasn't been merged into master, and those are the only two branches or tags. So it must be using the correct branch, but it is using an ancient version of the file that I have updated repeatedly over the last several hours while trying to figure this stuff out. When I look at the path in /private/var/folders..., I can see that there is a typo - one that I fixed 5 or 6 commits ago - and yes, I pushed the commits to origin. I even validated them via web browser. So terragrunt is most definitely failing to notice that the template in the module repo has been updated. It doesn't matter how long I wait after the commit before running init, so it isn't an eventual-consistency problem.
Additionally, that command you gave me for inclusion in the terraform.tfvars file for setting extra tfvars files is not getting applied when I run
My modules repo looks like this: stem-infra
main.tf looks like this:
My live repo looks like this: stem-envs/aws1 My outermost terraform.tfvars is as follows:
The vpc_mgmt/terraform.tfvars file looks like this:
env.tfvars has:
region.tfvars has:
And for my final problem, I cannot find any mention, in any documentation anywhere, of a way to pass maps around as variables. How do I do the assignment? I can assign a list like this:
but
results in syntax errors. The documentation about variables AND the documentation about modules fails to provide an example of passing a list or a map from a template variable through to a module variable. It looks like maybe your modules just use the list assignment syntax, so maybe casting a list to a map does the correct thing automatically, but some documentation would be nice. Google searches result in old bugs about a total inability to pass maps at all, but I think those are out of date. I'd test it and just see what happens, but I can't, because nothing works, and I have no idea if the reason nothing works is because of the maps. Or, at the very least, if maps cannot be passed, there must be a standard workaround for converting to strings and then parsing them back out, since I'm sure you cannot build your library without the ability to pass maps at all. It would be nice if such methods were published somewhere. The same goes for any other standard workarounds for functional deficiencies.
I'm starting to be convinced that the problem isn't user error here, and that there are a whole host of bugs in the current release of terragrunt that actually prevent it from working at all, no matter how it is set up. Maybe it works once it is initialized and working, but it doesn't seem able to accomplish initialization. I sure cannot figure out any way to specify a source, either in terraform.tfvars or in -terragrunt-source, so I cannot work in github AND I cannot work on the local filesystem. I also cannot get terragrunt to recognize changes in the repo, so it would be trivially easy to accidentally push broken or old infrastructure once I did have a working config.
Frankly, despite all of my positive feedback earlier today, the fact that I cannot get even the most basic configuration to actually function after several days of time-wasting, many emails and posts to issues, a 90-minute support call, and reading every piece of documentation that is out there doesn't exactly give me warm and fuzzy feelings about using terraform or terragrunt in a production environment. I still have yet to get a single thing to work (and I'm not exactly a novice when it comes to devops and configuration and infrastructure management), whether using publicly available modules or your gruntworks library, and whether I use the practices in Yevgeniy's book or incorporate some of the changes we discussed today. It just doesn't work, and I'll be damned if I can uncover a reason. |
@sgendler-stem Thank you for all the details, and I'm really sorry about the trouble you've been having. We actively maintain Terragrunt, have a battery of automated tests, and an active community of worldwide users, so this level of frustration is surprising. First, there seem to be several cases where your code is returning Terraform errors, not Terragrunt errors. For example:
are both Terraform errors, and:
is invalid HCL syntax. That being said, it sounds like the temp folder where Terragrunt downloads module files may not be updating correctly, which would definitely be a Terragrunt issue. Here are some immediate suggestions for a resolution:
I wish I had a more definitive resolution, but hopefully this gives you some leads. In the meantime, send us your full Terraform config, and we'll try to reproduce your issue. If we can reproduce your bugs, we'll gladly fix those. Either way, we'll work this through to completion and identify the underlying causes of confusion, which surely will or already have tripped up others. The fact that you reached this level of frustration suggests that Terragrunt perhaps does not provide enough "guard rails" or makes a fundamental assumption that's not clear to the end user, so we'll consider what we can do to avoid those issues. Thank you for your feedback! |
Unfortunately, there's nothing in your response that I'm not already aware of. The terraform errors are a result of terragrunt errors. For example, since init isn't listed as a command that requires vars, it isn't picking up aws_region and vpc_name. The -from-module flag NEVER includes the module directory, whether I use // or / in the github url OR the terragrunt-source directory specification. I'd love to use -terragrunt-source so I don't have to fight with github for every minor change, but terragrunt-source would have to actually work correctly for me to do that. I eventually fixed the lack of env.tfvars and region.tfvars by specifying the commands as:
in order to add init to the list of commands that need vars - since I cannot use ${concat()} in a terraform.tfvars file, which seemed the more natural way to add items to a list. Your examples of HCL syntax still don't explain how to assign lists and maps that are already contained in template variables, when trying to pass them to a module. I know I can do this:
because that's pretty much the sum-total of documentation of how to initialize variables in all terraform and terragrunt docs. and it seems that if I want to pass a list, I do the following:
But I picked that up by seeing examples rather than because it seems to be documented anywhere. So it seemed natural to pass a map by doing the following:
but that results in a syntax error. Maybe it's possible that this is perfectly legal (no quotes, so the value is just assigned straight across?)
but this next variant would surely create a string rather than a list, no? And I have the impression that variable interpolation only happens inside quoted strings, so I'm not sure how this wouldn't result in a string being assigned to var4
which implies that the previous (var3) mechanism actually coerces the list to a comma-delimited string and then parses that string when it is passed as the value of a list, since it certainly doesn't seem to create a list of a list, which is what I would otherwise expect. So that leaves me with two assumptions - that I cannot pass a map from a template var to a module var or else that I have to do so by coercing the map to a list of alternating key and value and then parse that back into a map. If I'm guessing something like this might work:
coercing the map to a list and the list to a map. And it is possible that if var6 inside the module is a map type, simply doing var6 = ["${var.another_map_var}"] will cause the coercion to map to happen correctly when var6 is passed to the module. But again, that kind of coercion and parsing shouldn't be happening without some kind of explanation in a document somewhere; that's an enormous amount of coding by trial and error to figure out, since it is far from the most obvious place to start, and for a feature that every single user is likely to get stuck on within their first day of using terragrunt and your library if they follow your suggested best practices, since every module is going to get called from a template and pretty much every module receives at least one map in an input variable, so this problem must be encountered constantly by new users.
None of the listed tricks seemed to work, incidentally, since I'm setting a map for public and private subnets in my terraform.tfvars file, then in my vpc template's main.tf I try to pass that to your vpc-mgmt module, but it is apparently not working, because I am not picking up the subnets I specify in the map I am passing in. I believe I have tried every variant suggested in this comment, so I still do not see how I can get a map from terraform.tfvars in my live repo to be passed into my template vars and then into my module - the kind of thing I'd have thought might be covered in the first page of documentation for how to use your library, honestly, since absolutely anyone attempting to use it will surely have a structure pretty much identical to mine. |
And for what it is worth, purging all of the source files results in huge delays while I wait 20+ minutes for provider plugins to download from Hashicorp. I'm not sure why those are so slow, but they are. The null provider required 5-10 minutes and each subsequent provider requires 2 or 3 minutes. It's very frustrating and completely eliminates the possibility of doing anything quickly, but I have to do it on every command or else I don't pick up the latest changes from github. It's not a networking problem, as I can pull the zip files from releases.hashicorp.com in a second or less. It's really not clear exactly what it is spending all of that time on. I can download the aws provider from the server in a half second, but downloading the plugin via terraform/terragrunt requires 10+ minutes. |
Finally, to add to the list of bugs/documentation problems, I included 5 or 6 subnets in my map, under the assumption that it would use 4 of them because that's how many availability zones there are. Instead, it seems to have used the size of my map to generate public and private subnets for imaginary availability zones, even while it didn't actually use the cidr blocks that my map provided for those subnets - public or private. That's a little disappointing. Here's the bottom section of my plan output:
And here are the subnets that should have been assigned - I know the map must have been passed through correctly, since it does create 6 subnets in a region that only has 4 AZs, but the cidr blocks have no relationship to the cidr blocks in the maps I passed in.
|
Caught the typo in my CIDR blocks - but it didn't fix anything in the cidr assignments.
It turns out that my variable names had a typo in them - subnets shouldn't be plural. Is there a way to get terraform to be strict about variables passed to modules so that if a variable is passed in which is not in the inputs for the module it generates an error rather than silently using defaults? |
There are just too many interconnected issues to here respond to them effectively async. I'll PM you with a link and if you're available, we can chat real-time to resolve some of these issues. |
I have everything working, finally. Some problems were simply typos in variable names (a strict mode would be super helpful here, as the error messages, if they exist at all, are often not that useful). I worked around 'info' not being in the list of commands that require vars by appending it to the list. module_map_var = "${var.template_map_var}" in the module block actually seems to do the correct thing, though it took me forever to get things running well enough to determine that, since there was no documentation to suggest it, and many list examples seemed to indicate that it wouldn't work correctly, since they all use square brackets around the interpolation string.
The --terragrunt-source flag and the source parameter in the module block do appear to be broken: the '//' that is supposed to precede the module name doesn't seem to get handled correctly, and it generates warnings about empty directories. But then it does the correct thing under the hood, pulling the whole repo into the 'empty' directory, which results in the correct thing being pulled into the temp dir. That is perhaps merely a lucky artifact of using a module repo that keeps each module in the root of the repo: it creates an empty temp dir and then pulls the entire content of the module repo into that empty temp dir. Since that creates a directory called vpc_mgmt as an accident of my naming convention, I seem to get lucky, and terraform does the right thing even though the -from-module flag is incorrect in the terraform command, since the directory gets created accidentally. So it is working, but will cease to work as soon as I try to load a template that isn't one level deep inside my module repo, or I use a template in a directory that doesn't match the name in the live repo. There are loads of warnings in the output, but at least it accidentally works.
provider downloads are super slow and I don't seem able to trust terragrunt/terraform to correctly detect changes in the module repo, whether using github/ssh or --terragrunt-source override, so I'm forced to suffer the slow downloads by clearing the temp files on every run. |
Great to see the issues are resolved.
Ok, glad to see at least some items in here are user error. :)
This is a general issue with Terraform (see #14324, #15377, and #15053), though I generally find the errors around missing variables are usually pretty clear. I'm not sure what Terragrunt could really do here...
I think you mean
Yeah, the syntax for denoting lists and maps is inconsistent in Terraform. I can see how this could lead to some confusion while you're wrestling other issues.
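To make the inconsistency concrete, here is the pass-through pattern that generally works in Terraform 0.11-era HCL (variable and module names are illustrative, not taken from this thread):

```hcl
variable "azs"     { type = "list" }
variable "subnets" { type = "map" }

module "example" {
  source = "./example"

  # Lists: square brackets around the interpolation; 0.11 flattens
  # the result, so this passes the list itself, not a list of lists.
  azs = ["${var.azs}"]

  # Maps: assign the quoted interpolation directly, with no braces
  # or brackets around it.
  subnets = "${var.subnets}"
}
```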
This sounds like it could be a Terragrunt issue, although it may be an issue with your template, or with Terraform. Can you paste the output that seems to be behaving strangely? Then we can determine whether this is the root issue that motivated this thread, a separate issue, or a mistake/bug elsewhere. It may also help to know that we are about to merge #340 to resolve #334. One issue we've seen is that the OS will delete the files in a temp folder, but not the folder itself, which would sometimes confuse Terragrunt.
I'm not experiencing that on my machine using all the same providers and git repos you are, so I'm not sure how to explain this one.
Next Steps
Now that you're more familiar with some of the Terraform syntax quirks, it will hopefully be easier to separate Terraform errors from Terragrunt issues. |
Here's the full output from the one and only run of apply that I've ever executed. You can see that it is generating warnings about the '//' in the source and claims that the resulting temp directory is empty (because it is) before it pulls the whole github repo into that empty directory, effectively creating the directory that was otherwise not created, only because my template directory and the directory name in the live repo happen to be the same, as far as I can tell. I rendered the bits that are pertinent in bold. I'm pretty sure you could replicate this locally, and possibly also break things entirely, by just renaming the template file to something different from the name of the directory in the live repo (or vice versa), or by nesting the template directories in the module repo deeper inside the repository so that they don't end up landing in the correct place in the temp dir. That's just a hunch, though.
$ terragrunt apply --terragrunt-source
*../../../../../stem-infra//vpc_mgmt/*
[terragrunt]
[/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt]
2017/11/01 22:51:29 Running command: terraform --version
[terragrunt] 2017/11/01 22:51:29 Reading Terragrunt config file at
/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/terraform.tfvars
[terragrunt] 2017/11/01 22:51:29 Cleaning up existing *.tf files in
/var/folders/xr/t6gsrby97350k0r85qr7blzh0000gn/T/terragrunt/1-Fiw_rVrVmzIDHkzOu35z2CQn0/N7YR7JXFv_AHrQ_vUpC9GGTlLbM
[terragrunt] 2017/11/01 22:51:29 *Downloading Terraform configurations from
file:///Users/sgendler/src/stem/stem-infra* into
/var/folders/xr/t6gsrby97350k0r85qr7blzh0000gn/T/terragrunt/1-Fiw_rVrVmzIDHkzOu35z2CQn0/N7YR7JXFv_AHrQ_vUpC9GGTlLbM
using terraform init
[terragrunt]
[/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt]
2017/11/01 22:51:29 Backend s3 has not changed.
[terragrunt]
[/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt]
2017/11/01 22:51:29 Running command: terraform init
-backend-config=encrypt=true
-backend-config=dynamodb_table=terraform-lock-table
-backend-config=bucket=stem-terraform-state-bucket
-backend-config=key=aws2/us-east-1/_global/vpc_mgmt/terraform.tfstate
-backend-config=region=us-west-2
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/../../../account.tfvars
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/../../region.tfvars
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/../env.tfvars
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/terraform.tfvars
-lock-timeout=20m *-from-module=file:///Users/sgendler/src/stem/stem-infra
/var/folders/xr/t6gsrby97350k0r85qr7blzh0000gn/T/terragrunt/1-Fiw_rVrVmzIDHkzOu35z2CQn0/N7YR7JXFv_AHrQ_vUpC9GGTlLbM*
Copying configuration from "*file:///Users/sgendler/src/stem/stem-infra*"...
*Terraform initialized in an empty directory!*
*The directory has no Terraform configuration files. You may begin working*
*with Terraform immediately by creating Terraform configuration files.*
[terragrunt] 2017/11/01 22:51:29 Copying files from
*/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt
into
/var/folders/xr/t6gsrby97350k0r85qr7blzh0000gn/T/terragrunt/1-Fiw_rVrVmzIDHkzOu35z2CQn0/N7YR7JXFv_AHrQ_vUpC9GGTlLbM/vpc_mgmt*
[terragrunt] 2017/11/01 22:51:30 Setting working directory to
/var/folders/xr/t6gsrby97350k0r85qr7blzh0000gn/T/terragrunt/1-Fiw_rVrVmzIDHkzOu35z2CQn0/N7YR7JXFv_AHrQ_vUpC9GGTlLbM/vpc_mgmt
[terragrunt] 2017/11/01 22:51:30 Backend s3 has not changed.
[terragrunt] 2017/11/01 22:51:30 Running command: terraform apply
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/../../../account.tfvars
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/../../region.tfvars
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/../env.tfvars
-var-file=/Users/sgendler/src/stem/stem-envs/aws2/us-east-1/_global/vpc_mgmt/terraform.tfvars
-lock-timeout=20m
data.aws_availability_zones.available: Refreshing state...
data.template_file.num_availability_zones: Refreshing state...
module.mgmt_vpc.aws_vpc.main: Creating...
assign_generated_ipv6_cidr_block: "" => "false"
cidr_block: "" => "10.0.0.0/16"
default_network_acl_id: "" => "<computed>"
default_route_table_id: "" => "<computed>"
default_security_group_id: "" => "<computed>"
dhcp_options_id: "" => "<computed>"
enable_classiclink: "" => "<computed>"
enable_classiclink_dns_support: "" => "<computed>"
enable_dns_hostnames: "" => "true"
enable_dns_support: "" => "true"
instance_tenancy: "" => "default"
ipv6_association_id: "" => "<computed>"
ipv6_cidr_block: "" => "<computed>"
main_route_table_id: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "mgmt"
module.mgmt_vpc.aws_vpc.main: Creation complete after 9s (ID: vpc-b03e96c8)
module.mgmt_vpc.aws_route_table.private[0]: Creating...
route.#: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "mgmt-private-0"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_route_table.private[2]: Creating...
route.#: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "mgmt-private-2"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_route_table.public: Creating...
propagating_vgws.#: "" => "<computed>"
route.#: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "mgmt-public"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_route_table.private[1]: Creating...
route.#: "" => "<computed>"
tags.%: "" => "1"
tags.Name: "" => "mgmt-private-1"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_subnet.public[1]: Creating...
assign_ipv6_address_on_creation: "" => "false"
availability_zone: "" => "us-east-1b"
cidr_block: "" => "10.0.10.0/22"
ipv6_cidr_block: "" => "<computed>"
ipv6_cidr_block_association_id: "" => "<computed>"
map_public_ip_on_launch: "" => "false"
tags.%: "" => "1"
tags.Name: "" => "mgmt-public-1"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_subnet.public[0]: Creating...
assign_ipv6_address_on_creation: "" => "false"
availability_zone: "" => "us-east-1a"
cidr_block: "" => "10.0.1.0/22"
ipv6_cidr_block: "" => "<computed>"
ipv6_cidr_block_association_id: "" => "<computed>"
map_public_ip_on_launch: "" => "false"
tags.%: "" => "1"
tags.Name: "" => "mgmt-public-0"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_subnet.private[1]: Creating...
assign_ipv6_address_on_creation: "" => "false"
availability_zone: "" => "us-east-1b"
cidr_block: "" => "10.0.110.0/22"
ipv6_cidr_block: "" => "<computed>"
ipv6_cidr_block_association_id: "" => "<computed>"
map_public_ip_on_launch: "" => "false"
tags.%: "" => "1"
tags.Name: "" => "mgmt-private-1"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_internet_gateway.main: Creating...
tags.%: "0" => "1"
tags.Name: "" => "mgmt"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_subnet.public[2]: Creating...
assign_ipv6_address_on_creation: "" => "false"
availability_zone: "" => "us-east-1c"
cidr_block: "" => "10.0.20.0/22"
ipv6_cidr_block: "" => "<computed>"
ipv6_cidr_block_association_id: "" => "<computed>"
map_public_ip_on_launch: "" => "false"
tags.%: "" => "1"
tags.Name: "" => "mgmt-public-2"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_subnet.private[2]: Creating...
assign_ipv6_address_on_creation: "" => "false"
availability_zone: "" => "us-east-1c"
cidr_block: "" => "10.0.120.0/22"
ipv6_cidr_block: "" => "<computed>"
ipv6_cidr_block_association_id: "" => "<computed>"
map_public_ip_on_launch: "" => "false"
tags.%: "" => "1"
tags.Name: "" => "mgmt-private-2"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_route_table.private[2]: Creation complete after 2s (ID:
rtb-a47c32de)
module.mgmt_vpc.aws_subnet.private[0]: Creating...
assign_ipv6_address_on_creation: "" => "false"
availability_zone: "" => "us-east-1a"
cidr_block: "" => "10.0.100.0/22"
ipv6_cidr_block: "" => "<computed>"
ipv6_cidr_block_association_id: "" => "<computed>"
map_public_ip_on_launch: "" => "false"
tags.%: "" => "1"
tags.Name: "" => "mgmt-private-0"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_route_table.public: Creation complete after 3s (ID:
rtb-2078365a)
module.mgmt_vpc.aws_vpc_endpoint.s3-public: Creating...
cidr_blocks.#: "" => "<computed>"
policy: "" => "<computed>"
prefix_list_id: "" => "<computed>"
route_table_ids.#: "" => "1"
route_table_ids.3869909465: "" => "rtb-2078365a"
service_name: "" => "com.amazonaws.us-east-1.s3"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_route_table.private[0]: Creation complete after 3s (ID:
rtb-64632d1e)
module.mgmt_vpc.aws_route_table.private[1]: Creation complete after 3s (ID:
rtb-07632d7d)
module.mgmt_vpc.aws_vpc_endpoint.s3-private: Creating...
cidr_blocks.#: "" => "<computed>"
policy: "" => "<computed>"
prefix_list_id: "" => "<computed>"
route_table_ids.#: "" => "3"
route_table_ids.1216147752: "" => "rtb-07632d7d"
route_table_ids.2661709090: "" => "rtb-64632d1e"
route_table_ids.625676261: "" => "rtb-a47c32de"
service_name: "" => "com.amazonaws.us-east-1.s3"
vpc_id: "" => "vpc-b03e96c8"
module.mgmt_vpc.aws_subnet.public[2]: Creation complete after 3s (ID:
subnet-c86b8195)
module.mgmt_vpc.aws_subnet.public[1]: Creation complete after 3s (ID:
subnet-310fbc7a)
module.mgmt_vpc.aws_subnet.private[2]: Creation complete after 3s (ID:
subnet-77678d2a)
module.mgmt_vpc.aws_subnet.public[0]: Creation complete after 3s (ID:
subnet-797a8b56)
module.mgmt_vpc.aws_route_table_association.public[1]: Creating...
route_table_id: "" => "rtb-2078365a"
subnet_id: "" => "subnet-310fbc7a"
module.mgmt_vpc.aws_route_table_association.public[0]: Creating...
route_table_id: "" => "rtb-2078365a"
subnet_id: "" => "subnet-797a8b56"
module.mgmt_vpc.aws_route_table_association.public[2]: Creating...
route_table_id: "" => "rtb-2078365a"
subnet_id: "" => "subnet-c86b8195"
module.mgmt_vpc.aws_internet_gateway.main: Creation complete after 3s (ID:
igw-0348f57a)
module.mgmt_vpc.aws_eip.nat: Creating...
allocation_id: "" => "<computed>"
association_id: "" => "<computed>"
domain: "" => "<computed>"
instance: "" => "<computed>"
network_interface: "" => "<computed>"
private_ip: "" => "<computed>"
public_ip: "" => "<computed>"
vpc: "" => "true"
module.mgmt_vpc.aws_route.internet: Creating...
destination_cidr_block: "" => "0.0.0.0/0"
destination_prefix_list_id: "" => "<computed>"
egress_only_gateway_id: "" => "<computed>"
gateway_id: "" => "igw-0348f57a"
instance_id: "" => "<computed>"
instance_owner_id: "" => "<computed>"
nat_gateway_id: "" => "<computed>"
network_interface_id: "" => "<computed>"
origin: "" => "<computed>"
route_table_id: "" => "rtb-2078365a"
state: "" => "<computed>"
module.mgmt_vpc.aws_subnet.private[1]: Creation complete after 3s (ID:
subnet-fa08bbb1)
module.mgmt_vpc.aws_route_table_association.public[1]: Creation complete
after 1s (ID: rtbassoc-4902b734)
module.mgmt_vpc.aws_route_table_association.public[0]: Creation complete
after 1s (ID: rtbassoc-2f00b552)
module.mgmt_vpc.aws_route_table_association.public[2]: Creation complete
after 1s (ID: rtbassoc-bc06b3c1)
module.mgmt_vpc.aws_eip.nat: Creation complete after 1s (ID:
eipalloc-85c69ab0)
module.mgmt_vpc.aws_nat_gateway.nat: Creating...
allocation_id: "" => "eipalloc-85c69ab0"
network_interface_id: "" => "<computed>"
private_ip: "" => "<computed>"
public_ip: "" => "<computed>"
subnet_id: "" => "subnet-797a8b56"
module.mgmt_vpc.aws_route.internet: Creation complete after 1s (ID:
r-rtb-2078365a1080289494)
module.mgmt_vpc.aws_vpc_endpoint.s3-public: Creation complete after 1s (ID:
vpce-753e8e1c)
module.mgmt_vpc.aws_vpc_endpoint.s3-private: Creation complete after 2s
(ID: vpce-df3686b6)
module.mgmt_vpc.aws_subnet.private[0]: Creation complete after 3s (ID:
subnet-77649558)
module.mgmt_vpc.aws_route_table_association.private[0]: Creating...
route_table_id: "" => "rtb-64632d1e"
subnet_id: "" => "subnet-77649558"
module.mgmt_vpc.aws_route_table_association.private[2]: Creating...
route_table_id: "" => "rtb-a47c32de"
subnet_id: "" => "subnet-77678d2a"
module.mgmt_vpc.aws_route_table_association.private[1]: Creating...
route_table_id: "" => "rtb-07632d7d"
subnet_id: "" => "subnet-fa08bbb1"
module.mgmt_vpc.aws_route_table_association.private[1]: Creation complete
after 0s (ID: rtbassoc-ef0bbe92)
module.mgmt_vpc.aws_route_table_association.private[2]: Creation complete
after 0s (ID: rtbassoc-2e00b553)
module.mgmt_vpc.aws_route_table_association.private[0]: Creation complete
after 0s (ID: rtbassoc-a4f440d9)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (10s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (20s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (30s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (40s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (50s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (1m0s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (1m10s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (1m20s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Still creating... (1m30s elapsed)
module.mgmt_vpc.aws_nat_gateway.nat: Creation complete after 1m32s (ID:
nat-0711956af38d29715)
module.mgmt_vpc.aws_route.nat[2]: Creating...
destination_cidr_block: "" => "0.0.0.0/0"
destination_prefix_list_id: "" => "<computed>"
egress_only_gateway_id: "" => "<computed>"
gateway_id: "" => "<computed>"
instance_id: "" => "<computed>"
instance_owner_id: "" => "<computed>"
nat_gateway_id: "" => "nat-0711956af38d29715"
network_interface_id: "" => "<computed>"
origin: "" => "<computed>"
route_table_id: "" => "rtb-a47c32de"
state: "" => "<computed>"
module.mgmt_vpc.aws_route.nat[1]: Creating...
destination_cidr_block: "" => "0.0.0.0/0"
destination_prefix_list_id: "" => "<computed>"
egress_only_gateway_id: "" => "<computed>"
gateway_id: "" => "<computed>"
instance_id: "" => "<computed>"
instance_owner_id: "" => "<computed>"
nat_gateway_id: "" => "nat-0711956af38d29715"
network_interface_id: "" => "<computed>"
origin: "" => "<computed>"
route_table_id: "" => "rtb-07632d7d"
state: "" => "<computed>"
module.mgmt_vpc.aws_route.nat[0]: Creating...
destination_cidr_block: "" => "0.0.0.0/0"
destination_prefix_list_id: "" => "<computed>"
egress_only_gateway_id: "" => "<computed>"
gateway_id: "" => "<computed>"
instance_id: "" => "<computed>"
instance_owner_id: "" => "<computed>"
nat_gateway_id: "" => "nat-0711956af38d29715"
network_interface_id: "" => "<computed>"
origin: "" => "<computed>"
route_table_id: "" => "rtb-64632d1e"
state: "" => "<computed>"
module.mgmt_vpc.aws_route.nat[0]: Creation complete after 1s (ID:
r-rtb-64632d1e1080289494)
module.mgmt_vpc.aws_route.nat[1]: Creation complete after 2s (ID:
r-rtb-07632d7d1080289494)
module.mgmt_vpc.aws_route.nat[2]: Creation complete after 2s (ID:
r-rtb-a47c32de1080289494)
module.mgmt_vpc.null_resource.vpc_ready: Creating...
module.mgmt_vpc.null_resource.vpc_ready: Creation complete after 0s (ID:
3854081390465364666)
Apply complete! Resources: 27 added, 0 changed, 0 destroyed.
Releasing state lock. This may take a few moments...
Outputs:
availability_zones = [
us-east-1a,
us-east-1b,
us-east-1c
]
nat_gateway_public_ips = [
34.237.252.19
]
num_availability_zones = 3
private_subnet_cidr_blocks = [
10.0.100.0/22,
10.0.108.0/22,
10.0.120.0/22
]
private_subnet_ids = [
subnet-77649558,
subnet-fa08bbb1,
subnet-77678d2a
]
private_subnet_route_table_ids = [
rtb-64632d1e,
rtb-07632d7d,
rtb-a47c32de
]
public_subnet_cidr_blocks = [
10.0.0.0/22,
10.0.8.0/22,
10.0.20.0/22
]
public_subnet_ids = [
subnet-797a8b56,
subnet-310fbc7a,
subnet-c86b8195
]
public_subnet_route_table_id = rtb-2078365a
vpc_cidr_block = 10.0.0.0/16
vpc_id = vpc-b03e96c8
vpc_name = mgmt
vpc_ready = 3854081390465364666
…On Wed, Nov 1, 2017 at 11:46 PM, Josh Padnick ***@***.***> wrote:
Great to see the issues are resolved.
I have everything working, finally. Some were simply typos in variable
names
Ok, glad to see at least some items in here are user error. :)
(a strict mode would be super helpful here, as the error messages, if they
exist at all, are often not that useful).
This is a general issue with Terraform (see #14324
<hashicorp/terraform#14324>, #15377
<hashicorp/terraform#15377>, and #15053
<hashicorp/terraform#15053>), though I
generally find the errors around missing variables are usually pretty
clear. I'm not sure what Terragrunt could really do here...
I worked around 'info' not being in the list of commands that require vars
by appending it to the list.
I think you mean terraform init? If you run terraform init --help you'll
see that terraform init doesn't actually accept -var flags. Can you
provide any additional info on this one? Given the length of this thread,
it may be better to open that in a separate GitHub Issue.
module_map_var = "${var.template_map_var}" in the module block actually
seems to do the correct thing
Yeah, the syntax for denoting lists and maps is inconsistent in Terraform.
I can see how this could lead to some confusion while you're wrestling
other issues.
The --terragrunt-source flag and the source parameter in the module block
do appear to be broken
This sounds like it could be a Terragrunt issue, although it may be an
issue with your template, or with Terraform. Can you paste the output that
seems to be behaving strangely? Then we can determine whether this is the root
issue that motivated this thread
<#311 (comment)>,
a separate issue, or a mistake/bug elsewhere.
It may also help to know that we are about to merge #340
<#340> to resolve #334
<#334>. One issue we've
seen is that the OS will delete the *files* in a temp folder, but not the
folder itself, which would sometimes confuse Terragrunt.
provider downloads are super slow
I'm not experiencing that on my machine using all the same providers and
git repos you are, so I'm not sure how to explain this one.
*Next Steps*
To keep this thread sane, let's try to focus on the known or suspected
Terragrunt-specific issues. For other questions about using Gruntwork
packages, it's best if you use our Gruntwork support channel.
Now that you're more familiar with some of the Terraform syntax quirks, it
will hopefully be easier to separate Terraform errors from Terragrunt
issues.
|
I didn't mean that it should implements production-ready infrastructure -
that's what your reference architecture is for. I meant that it
demonstrates a working installation of all of the terraform and terragrunt
and revision control systems that would need to be in place and correctly
configured to talk to each other to start the process of piecing together
infrastructure - a production development environment, if you will.
And I'm not necessarily blaming terragrunt for my ills just because I've
mentioned them here. I'm just going through what I'm sure are typical
teething problems with a new technology and trying to suggest ways that
Gruntworks, as a promoter of terraform and vendor of terraform-based
products, could do to ease the process of exploration. When push comes to
shove, I'm adopting both terraform and terragrunt simultaneously, with
terraform being, for all intents and purposes, a dependency of terragrunt,
so it doesn't seem unnatural to look to Gruntwork and terragrunt for
assistance in resolving the issues I'm encountering while I get familiar
with terragrunt. Yes, I could buy a fully functional architecture from you
and eliminate that process up front, but then the first time I need to make
changes to our purchased architecture, I'll be fighting through the same
teething problems but in a vastly more complicated architecture, not to
mention one that would be supporting production applications - finding my
way through the process of debugging the cascade of infrastructure updates
resulting from a simple typo in a CIDR block was hard enough with just a
single module with no dependencies instantiated. I can only imagine how
daunting it would be if I made a similar mistake in a fully specified
architecture. What I'm trying to do is get myself familiar and facile
enough with the tools that I can then confidently administrate and modify
what I'll likely purchase from you.
…On Thu, Nov 2, 2017 at 5:19 PM, Yevgeniy Brikman ***@***.***> wrote:
Yeah, but they actually lack all of the important parts that I was looking
for examples of - they don't demonstrate how to access a module from within
a template since they only use resources directly within the *.tf files
Just added an example of using the Consul module
<https://registry.terraform.io/modules/hashicorp/consul/aws/0.0.5> from
the Terraform Registry:
gruntwork-io/terragrunt-infrastructure-modules-example#2
<gruntwork-io/terragrunt-infrastructure-modules-example#2>
gruntwork-io/terragrunt-infrastructure-live-example#3
<gruntwork-io/terragrunt-infrastructure-live-example#3>
It's merged now, so just browse the repos to see what it looks like.
and they don't show the cascading variable overrides
What variable overrides are you referring to?
and instead of including empty directories for _global and region and
account layers in the hierarchy in order to demonstrate the concepts, those
dirs are entirely missing
They show account (prod, non-prod) and region (us-east-1) layers. _global
isn't anything exciting. It works exactly like anything else in the account
and region layers.
What I was looking for was production-ready example repos that just have a
minimum of templates and configurations but are otherwise production-ready,
whereas those really are just quick examples of very simplistic code
organization.
Production-ready code and "minimum" do not mix. Production-ready means you
take into account security, scalability, maintainability, monitoring,
configuration, reuse, versioning, and dozens of other things, so the
example would end up quite large. Apologies for sounding like a broken
record, but if you want something actually production ready, our Reference
Architecture is what you're looking for :)
And I seem to be required to use --terragrunt-source-update or else it
never sees my updated code
Are you using Git URLs or local file paths?
And for what it is worth, I did enable the provider cache. It doesn't seem
to have helped at all. I have the cache directory and it has the binaries
in it, but it still downloads all the provider plugins on every run. I'm
assuming this is somehow the --terraform-source-update flag's doing, but I
just get constant errors if I ditch that.
I'm not sure why you're so dead set on blaming Terragrunt for all your
ills :-\
I have no clue why providers download slowly for you. Or why the cache
wouldn't be working. Terragrunt has no influence on either one.
For what it is worth, I included a perfect cut-and-paste copy of the
example code for the module for adding network ACLs to a mgmt-vpc and when
I ran it once, it creates the resources correctly on the first attempt.
When I applied it again, 2 minutes later, with absolutely no changes to
anything (I was debugging the provider cache thing), it still decided it
needed to kill a bunch of resources and re-create them, and then it exited
with an error about a network association with a particular id not existing.
Please post all of your VPC issues in the VPC repo. I don't want to
clutter this already very long thread with unrelated topics.
|
Terraform 0.11 is out, and I think it is time for Terragrunt to support Terraform Registry URLs and versioning natively:

```hcl
terragrunt = {
  terraform {
    source  = "terraform-aws-modules/vpc/aws"
    version = "v1.2.0"
  }
}
```

As described in the Upgrading to 0.11 guide:
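For comparison, plain Terraform 0.11 consumes registry modules natively with a `version` constraint (the module name and constraint below are just illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  # Version constraints support exact pins and operators like ">=" and "~>"
  version = "~> 1.2"
}
```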
|
+1. Anyone up for a PR? |
Seems like this has fallen by the wayside. Any chance of some love? I’d love to tackle, but have zero Go chops 😞 |
We'd love to get to it, but there's a long list of feature requests for Terragrunt, and given that this one has a simple workaround (use a Git URL!), it's hard to prioritize it. PRs are welcome though :) |
@brikis98 while the Git URL works for fetching the modules, they will not work due to Terragrunt's requirement for the following which is not in the modules:
Is there a workaround for that? Would be more DRY to not require that in every module. |
@seanorama I use hooks to copy that file, like this:

```hcl
terragrunt = {
  terraform {
    after_hook "copy_common_main_variables" {
      commands = ["init"]
      execute  = ["cp", "${get_parent_tfvars_dir()}/../common/main_variables.tf", "."]
    }
  }
}
```
|
@antonbabenko Perfect. How do you get around not having a double slash (//) when modules are in the root dir of a git repo? For example: |
I usually don't need it:

```hcl
terragrunt = {
  terraform {
    source = "git::git@github.com:terraform-aws-modules/terraform-aws-vpc.git?ref=v1.24.0"
  }
}
```
|
@antonbabenko Would you please tell how you fix that? |
Yes, I always have |
Any progress on supporting Terraform Registry URLs and versioning natively? Currently I have to replace registry sources with code like the below.
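For illustration (this is not the commenter's elided snippet, and the module and tag are assumptions), the manual replacement typically swaps the registry address for an equivalent Git URL pinned to a release tag:

```hcl
# Registry form we would like Terragrunt to understand:
#   source  = "terraform-aws-modules/vpc/aws"
#   version = "1.24.0"
# Git form Terragrunt accepts today:
terragrunt = {
  terraform {
    source = "git::git@github.com:terraform-aws-modules/terraform-aws-vpc.git?ref=v1.24.0"
  }
}
```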
|
@ozbillwang I have not made any progress on this. |
I am actually using something like this:
It seems to work well for me. So it sounds like it does support the Terraform Registry now? |
We tested with a private registry and the latest Terragrunt version. Source:
Error:
So at least for private registries it doesn't work (yet). |
@cschroer In your case |
@antonbabenko not when using a module registry imho. See https://www.terraform.io/docs/modules/sources.html#terraform-registry |
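For reference, registry module addresses follow the format described in the docs linked above, `[hostname/]namespace/name/provider`, with the hostname defaulting to the public registry. A sketch (hostname and modules here are hypothetical):

```hcl
module "consul" {
  # Private registry hosts are addressed by prefixing the hostname
  source  = "app.terraform.io/example-corp/consul/aws"
  version = "0.1.0"
}

module "vpc" {
  # No hostname prefix means registry.terraform.io
  source  = "terraform-aws-modules/vpc/aws"
  version = "1.24.0"
}
```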
This should now be possible using a `generate` block. |
@brikis98 I am not sure how |
Whoops, going too quickly, this issue has nothing to do with generate block. Re-opening. |
Is this still being actively worked on? |
Is this still being worked on? Our team has had to abandon Terragrunt for a project we're starting up because we're integrating Terraform registries. |
Kinda ridiculous that you can't use Terraform Registry URLs. A workaround I came up with: create a git repo called "terragrunt-proxy", add an empty .tf file (commit/push to the repo), then use the generate block to generate the module using Terraform syntax:
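A minimal sketch of that workaround, assuming a recent Terragrunt with `generate` support (the repo, module, and version names are made up for illustration):

```hcl
# terragrunt.hcl
terraform {
  # Nearly empty git repo that exists only to satisfy Terragrunt's fetch step
  source = "git::git@github.com:example-org/terragrunt-proxy.git"
}

# Write a plain Terraform file into the working directory so that Terraform
# itself resolves the registry module
generate "registry_module" {
  path      = "main.tf"
  if_exists = "overwrite"
  contents  = <<EOF
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.77.0"
}
EOF
}
```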
|
Pls terragrunt 🙏 |
Initial version that adds support for Terraform Module Registry has been released: https://github.com/gruntwork-io/terragrunt/releases/tag/v0.31.5 (binaries should show up in 15~30 mins) There are two follow up issues that discuss some limitations of this feature. Please follow those tickets and 👍 if you are interested in those: |
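For anyone landing here later: the v0.31.5 release above adds a `tfr://` protocol for `source` (the module and version below are illustrative; double-check against the release notes):

```hcl
terraform {
  # An empty hostname (tfr:///) defaults to the public registry
  source = "tfr:///terraform-aws-modules/vpc/aws?version=3.5.0"
}
```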
Correct me if I am wrong, but it is not possible to download modules from Terraform Registry.
Error
Copying configuration from "file:///tmp/terragrunt-infrastructure-live-example/non-prod/us-east-1/qa/mysql/terraform-aws-modules/ec2-instance/aws"...
Error copying source module: error downloading 'file:///tmp/terragrunt-infrastructure-live-example/non-prod/us-east-1/qa/mysql/terraform-aws-modules/ec2-instance/aws': source path error: stat /tmp/terragrunt-infrastructure-live-example/non-prod/us-east-1/qa/mysql/terraform-aws-modules/ec2-instance/aws: no such file or directory
terragrunt version v0.13.7