What is the best way to run import with terragrunt? #727
Can you elaborate a bit on your use case? E.g., what would be an example of a module that is applied in the first step?

On a side note, we generally discourage usage of the -all commands for daily operations.
Okay, those seem like good arguments, but I have another question: if I DON'T use -all, then what would happen if I ran terragrunt in a module whose terraform.tfvars has a dependency on another module? Will it ask me to run that module first? The only reason I'm relying on the -all variants is the dependency = [] block. Is there a way to ensure this dependency execution without the -all variants?
Ah, in that case, no: the dependency blocks are only honored by the -all commands.

So given that, can you provide some more context on the infrastructure itself? What is a module that needs to be applied first, and then another module that needs to be imported before the third module can be applied? I am thinking that you might be able to leverage the before hooks to optimistically import infrastructure in the module that needs to do it, before the call to `apply`.

To elaborate a bit more on my point about daily operations and the -all commands: in most cases, you should only need to rely on the dependency ordering when you are initially setting up your infrastructure. Once your infrastructure is stood up, the cases where you need to touch your entire infrastructure should be rare. We have found that it is usually much safer to implement and update your changes one module at a time.

Here is a toy example to explain the thought process: let's suppose you were deploying a VPC and a database into it. You might structure your infrastructure folder as follows:
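For instance, with one folder per module (the exact folder and file names here are illustrative):

```
live/
├── vpc/
│   └── terraform.tfvars
└── sql/
    └── terraform.tfvars
```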
You would then specify the dependency on the vpc module in the sql module's terraform.tfvars.

Now let's consider what you might change in the infrastructure. What would be changes that you might make? Which modules would be affected by those changes? Chances are, you would rarely ever need to touch the VPC module to make most of your changes. For example, you might want to deploy an app that uses the database. For such a change, you would add a new module, and most likely the existing VPC and SQL modules will suffice. Or maybe you want to modify the port that the db uses. In this case, you need to change the DB to update the port it is listening on, and the app. In a naive deployment with downtime, you can make the requisite changes and apply them using the -all commands.

But what if you wanted to do it with limited downtime? In that case, you will first deploy a replica of the DB listening on a different port. You will want to make sure it is deployed correctly and tested before touching the app, so you would make the changes to the sql module first. Then, once you verify the replica is working correctly, you would deploy a canary/blue-green deployment of your app that uses the replica on the new port. Once you confirm the replica works, you can failover the DB to the new one and slowly transition traffic to the new version of your app.

In the latter scenario, you would be making your changes one module at a time instead of simultaneously to the DB and the app, despite the change affecting both modules. Because of this, you would most likely not use the -all commands. So really, the usage of the -all commands tends to be limited to the initial deployment, and I go back to this question whenever I find a use case reaching for them.
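Concretely, a dependency declaration in the sql module's terraform.tfvars could look something like this (old terraform.tfvars-style terragrunt syntax; the source URL is illustrative):

```hcl
terragrunt = {
  terraform {
    source = "git::git@github.com:acme/infrastructure-modules.git//sql"
  }

  # Ensures the vpc module is applied first when running the -all commands
  dependencies {
    paths = ["../vpc"]
  }
}
```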
So in our case we have 2 main situations where we use the dependency declaration.
So before, I used to do something like this:

That seems fine when you look at it, but the role needs a specific policy before the module can actually run. Sometimes the module runs once the ARN is created, but before the user has permission for that actual owner-link, so AWS raises an error. Now I do it differently: in app_storage, for example, I declare the dependency explicitly via terragrunt.
Since modules don't handle dependencies with depends_on (check the issue), I use the dependency block in terragrunt to declare it explicitly. But you are right, the -all variants are used mostly the first time. Still, I want to keep things consistent, because sometimes people on my team change something in one module that affects another, and running the -all variants gives the FULL perspective. I was also glad to have -all because I didn't need to write a shell script for it. A shell script with the execution order can be confusing for people who change terraform but are not very savvy with it. For a person that is learning terraform, terraform in itself can be challenging; add a shell script with an execution order on top and things get even more complex. So we are all very glad with the -all variant (except that it doesn't support landscape, but this is another issue).

Why do I want to import then? Well, due to reason number 1: sometimes Kubernetes runs a command that creates something in AWS, so I would like to run that module and then run an import on top of the things that were created by that command. Currently the -all variant doesn't support that.
Is there a reason data sources don't work for your use case? If you are using Kubernetes to manage AWS resources, you should be able to look the created resources up with a data source rather than importing them.

As far as message passing across modules goes, you can leverage the outputs of the module that runs the kube command. The idea would be to have the kube command module create the resource, look it up with a data source, and expose the attributes you need as outputs. You can then use those outputs in the downstream modules.

At least, this would be how I would handle it. It is a bit hard for me to say if you can adapt this pattern into your modules without seeing your code, but I hope this leaves some inspiration? Let me know if this doesn't work for you!
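As a sketch of that data-source-plus-outputs pattern, assuming the externally created resource is an S3 bucket (the resource type and bucket name are my assumptions, not from the thread):

```hcl
# Look up a resource that was created outside of Terraform
# (e.g. by a command run from Kubernetes) instead of importing it.
data "aws_s3_bucket" "created_elsewhere" {
  bucket = "my-externally-created-bucket"  # illustrative name
}

# Expose the attributes downstream modules need.
output "bucket_arn" {
  value = "${data.aws_s3_bucket.created_elsewhere.arn}"
}
```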
Module dependencies are something we struggle with too, but there is a workaround that we have found works pretty well. See this comment on a related issue to the one you pointed at: hashicorp/terraform#1178 (comment) The key idea is to use an output of one module as an input of the other, which creates an implicit dependency and forces Terraform to order the modules correctly.
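A minimal sketch of that workaround, reusing the VPC/sql example from earlier (module names and the vpc_id variable are illustrative):

```hcl
module "vpc" {
  source = "./modules/vpc"
}

module "sql" {
  source = "./modules/sql"

  # Referencing an output of the vpc module creates an implicit
  # dependency: Terraform must apply vpc before sql.
  vpc_id = "${module.vpc.vpc_id}"
}
```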
Yes, currently I'm fetching the asset with data, but I would like to import it. Of course, I could use data, output everything, and then "simulate" an import. That I could do. I suppose that answers my original question. I just wished there was an easier way.
Yes. I tried using import from state, but if you are creating infra from scratch, then terraform plan will output an error if the state doesn't exist yet. That means I cannot even plan, let alone apply, my infra from scratch. That's a problem, so one more reason for me to like the terragrunt dependency and -all variants.
Hummm, for me this looks quite ugly, but I guess I could try. Thanks for the tip! Thanks for all the replies!
In conclusion, I think I will keep running the -all variants, but slowly move to an opt-in non-all approach, maybe relying on some shell script. I prefer doing that to using the terraform hack for the module dependency issue, which I found to be super ugly. For the imports, I'll continue using data and outputting values instead of importing.
I closed this by mistake.. reopening...
This is the annoying part about the import workflow. That said, …
One thought: if you really want to implement the import flow automatically, the before/after hooks might be the way to script it.
I agree. I really hope they implement this in the core.
Thanks for the kind words! Glad that you are finding it useful!
For completeness, here is how I ended up solving it with hooks, in an operating-system-agnostic way (devs run terragrunt manually on Windows, CI/CD runs terragrunt on Linux), using a Python script. MIT licensed; use this however you want.

In my terragrunt configuration:

```hcl
terraform {
  # terragrunt will bootstrap for us and create these resources,
  # but we want to further modify them, so we need to import them
  after_hook "import_s3_state_bucket" {
    commands = ["init"]
    execute  = ["py", "${get_parent_terragrunt_dir()}/../bin/import-resource-hook", "aws_s3_bucket.state_bucket", local.bucket_state_fullname]
  }

  after_hook "import_dynamodb_tf_lock_state" {
    commands = ["init"]
    execute  = ["py", "${get_parent_terragrunt_dir()}/../bin/import-resource-hook", "aws_dynamodb_table.tf_lock_state", local.dynamodb_endpoint_fullname]
  }

  source = "${get_parent_terragrunt_dir()}/../infrastructure/bootstrap"
}
```

Then in my `bin/import-resource-hook`:

```python
#!/usr/bin/env python3
import argparse
import subprocess
import sys

parser = argparse.ArgumentParser()
parser.add_argument("type")  # resource address, e.g. aws_s3_bucket.state_bucket
parser.add_argument("name")  # real-world ID to import
args = parser.parse_args()

# If the resource is already in the state, `terraform import` fails;
# swallow the error code so terragrunt does not bail out.
subprocess.run(["terraform", "import", args.type, args.name], text=True)
sys.exit(0)
```

Please note I am colocating my .tf files alongside my terragrunt configuration. Since the import will fail if the resource already exists in the state, and the bad error code would make terragrunt bail out, I have to overwrite the bad error code with a `sys.exit(0)`.

As for why I wanted to do this: I have specific security requirements to implement on my state bucket. While it is fantastic that Terragrunt creates the state resources if they don't exist, and great that it lets me specify tags, it is a slippery slope; I eventually want to manage the resource in Terraform itself for full configurability. For example, Terraform also recommends that you turn on versioning, and many recommend you set prevent_destroy on the state bucket. So this is the best of both worlds and lets me do whatever I want after the "basic" bucket is created.
Closing, as the original question has been answered thoroughly. The answers and examples here should be captured in a knowledge base though, so I added a label for that.
Hello All,
I really like terragrunt =) and one of the reasons I like it is the plan-all and the dependency declaration in terraform.tfvars.
Unfortunately not all things can be created by Terraform, and sometimes we need to import them... and then carry on with our terragrunt execution.
My flow would be like:

```
terragrunt apply-all
terragrunt import
terragrunt apply-all
```
What is the best way to accomplish this?
One way we could do it is by not using the -all commands and building a shell script that enters each folder and does the plans & imports, but that means losing the -all functionality.
What would be the best way to do this with terragrunt?