Using variables in terraform backend config block #13022
I am trying to do something like this and getting the same "configuration cannot contain interpolations" error. While it seems like this is being worked on, I wanted to also ask: is this the right way for me to use access and secret keys? Do they have to be placed here so that I don't have to check the access and secret keys into GitHub?
I have the same problem, i.e. I would love to see interpolations in the backend config. Now that we have "environments" in Terraform, I was hoping to have a single config.tf with the backend configuration and use environments for my states. The problem is that I want to assume an AWS role based on the environment I'm deploying to. I can do this in "provider" blocks, since the provider block allows interpolations, so I can assume the relevant role for the environment I'm deploying to; however, the same doesn't work if I also rely on the role being set for the backend state management (e.g. when running …).
I managed to get it working by using AWS profiles instead of the access keys directly. What I did was not optimal, though: in my build steps, I ran a bash script that called `aws configure` and ultimately set the default access key and secret.
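A sketch of that profile-based workaround (bucket and profile names are hypothetical): the s3 backend accepts a `profile` argument, so the credentials can stay in `~/.aws/credentials` rather than in the configuration or in environment-juggling scripts:

```hcl
terraform {
  backend "s3" {
    bucket  = "my-state-bucket"   # hypothetical bucket name
    key     = "terraform.tfstate"
    region  = "us-east-1"
    profile = "deploy"            # named profile from ~/.aws/credentials
  }
}
```

Note the `profile` value is still a hard-coded literal; the point is only that the backend block never sees the raw keys.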
We want to achieve something similar to @antonosmond. At the moment we use multiple environments (prod/stage) and want to upload tfstate files to S3.
In this case the above backend definition leads us to this error:
Now if we try to hardcode it like this:
we get the following notification:
Is there a workaround for this problem at the moment? The documentation for backend configuration does not cover working with environments.
Solved: it seems my local test env was still running on Terraform 0.9.1; after updating to the latest version, 0.9.2, it was working for me.
Hi,
This is the message when I try to run terraform init
Is this expected behaviour on v0.9.3? Are there any workarounds for this?
In case it's helpful to anyone, the way I get around this is as follows:
All of the relevant variables are exported at the deployment pipeline level for me, so it's easy to init with the correct information for each environment.
I don't find this ideal, but at least I can easily switch between environments and create new environments without having to edit any Terraform.
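A minimal sketch of that pipeline pattern, with hypothetical variable names: the values exported by the CI job are turned into `-backend-config` key/value pairs for `terraform init`, so the backend block itself stays free of credentials and environment names:

```shell
#!/bin/sh
# Hypothetical pipeline variables; a real CI job would export these.
ENVIRONMENT="${ENVIRONMENT:-staging}"
STATE_BUCKET="${STATE_BUCKET:-mycompany-tfstate-$ENVIRONMENT}"

# Each -backend-config pair fills in a key deliberately omitted from the
# partial backend block in the .tf files.
CMD="terraform init -backend-config=bucket=$STATE_BUCKET -backend-config=key=$ENVIRONMENT/terraform.tfstate"
echo "$CMD"
```

Changing environments is then just a matter of exporting different values before running init.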
@gsirvas @umeat To achieve multiple environments with the same backend configuration it is not necessary to use variables/interpolation. It is expected that it is not possible to use variables/interpolation in backend configuration; see the comment from @christofferh. Just write it like this:
Terraform will split and store environment state files in a path like this:
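A sketch of the single-backend setup @NickMetz describes (the bucket name is hypothetical). With environments (later renamed workspaces), the s3 backend automatically nests each non-default environment's state under an `env:/` prefix, so no interpolation is needed:

```hcl
terraform {
  backend "s3" {
    bucket = "mycompany-terraform-state"  # hypothetical, shared by all environments
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}
```

After `terraform env new stage`, state for that environment is stored at `env:/stage/terraform.tfstate` in the same bucket.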
@NickMetz it's trying to do multiple environments with multiple backend buckets, not a single backend. You can't specify a different backend bucket in terraform environments. In my example you could still use terraform environments to prefix the state file object name, but you get to specify different buckets for the backend. Perhaps it's better to just give accross account access to the user / role which is being used to deploy your terraform. Deploying your terraform to a different account, but using the same backend bucket. Though it's fairly reasonable to want to store the state of an environment in the same account that it's deployed to. |
@umeat in that case you are right, it is not possible at the moment to use different backends for each environment. It would be more comfortable to have a backend mapping for all environments, which is not implemented yet.
Perhaps a middle ground would be to not error out on interpolation when the variable was declared in the environment as …
I also would like to be able to use interpolation in my backend config. Using v0.9.4, confirming this frustrating point still exists. In my use case I need to reuse the same piece of code (without writing a new repo each time I'd want to consume it as a module) to maintain multiple separate statefiles.
Same thing for me. I am using Terraform v0.9.4.
Here is the error output:
I needs dis! For many features being developed, we want our devs to spin up their own infrastructure that will persist only for the length of time their feature branch exists... to me, the best way to do that would be to use the name of the branch to create the key for the path used to store the tfstate (we're using Amazon infrastructure, so in our case, an S3 bucket like the examples above). I've knocked up a bash script which will update TF_VAR_git_branch every time a new command is run from an interactive bash session. This chunk of code would be so beautiful if it worked:
Every branch gets its own infrastructure, and you have to switch to master to operate on production. Switching which infrastructure you're operating against could be as easy as checking out a different git branch. Ideally it'd be set up so everything named "project-name-master" would have different permissions that prevented any old dev from applying to it. It would be an infrastructure-as-code dream to get this working.
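The snippet referenced above didn't survive in this thread; a hedged reconstruction of what such a block would look like (Terraform rejects it with "configuration cannot contain interpolations", which is the whole point of this issue):

```hcl
# DOES NOT WORK: interpolation is not allowed in the backend block.
variable "git_branch" {}   # populated via TF_VAR_git_branch

terraform {
  backend "s3" {
    bucket = "my-infra-state"                                  # hypothetical bucket
    key    = "project-name-${var.git_branch}/terraform.tfstate"
    region = "us-east-1"
  }
}
```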
@NickMetz said...
Your top-level structure looks nice and tidy for traditional dev/staging/prod ... sure:
But what if you want to stand up a whole environment for project-specific features being developed in parallel? You'll have a top-level key for each story branch, regardless of which project that story branch is in...
It makes for a mess at the top level of the directory structure, and inconsistency in what you find inside each story-level dir structure. Full control over the paths is ideal, and we can only get that through interpolation. Ideally I'd want my structure to look like "project/${var.git_branch}/terraform.tfstate", yielding:
Now, everything you find for a given project is under its directory. As long as the env is hard-coded at the beginning of the remote tfstate path, you lose this flexibility. Microservices are better versioned and managed discretely per component, rather than dumped into common prod/staging/dev categories which might be less applicable on a per-microservice basis; each one might have a different workflow with different numbers of staging phases leading to production release. In the example above, project1 might not even have staging... and project2 might have unit/regression/load-testing/staging phases leading to production release.
You'd think at the very least you'd be allowed to use …
In Terraform 0.10 there will be a new setting …
I know a +1 does not add much but yeah, I need this too to have 2 different buckets, since we have 2 AWS accounts.
I was hoping to do the same thing as described in #13603 but the lack of interpolation in the terraform block prevents this.
+1
Sure, but the …
Yes, I won't track this file in git. Do you have any suggestions for this use case?
I came up with a commit that lets me use all functions and the path.* values in the backend config:
diff --git a/internal/command/meta_backend.go b/internal/command/meta_backend.go
index 908e87b08..60acd5ce0 100644
--- a/internal/command/meta_backend.go
+++ b/internal/command/meta_backend.go
@@ -19,6 +19,7 @@ import (
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hcldec"
+ "github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/cloud"
"github.com/hashicorp/terraform/internal/command/arguments"
@@ -1329,7 +1330,16 @@ func (m *Meta) backendConfigNeedsMigration(c *configs.Backend, s *legacy.Backend
schema := b.ConfigSchema()
decSpec := schema.NoneRequired().DecoderSpec()
- givenVal, diags := hcldec.Decode(c.Config, decSpec, nil)
+
+ terraform.DefaultEvaluator.Meta.OriginalWorkingDir = m.WorkingDir.OriginalWorkingDir()
+ scope := terraform.DefaultEvaluationStateData.Evaluator.Scope(terraform.DefaultEvaluationStateData, nil, nil)
+ ctx, _ := scope.EvalContext([]*addrs.Reference{
+ {Subject: addrs.PathAttr{Name: "cwd"}},
+ {Subject: addrs.PathAttr{Name: "root"}},
+ {Subject: addrs.PathAttr{Name: "module"}},
+ })
+
+ givenVal, diags := hcldec.Decode(c.Config, decSpec, ctx)
if diags.HasErrors() {
log.Printf("[TRACE] backendConfigNeedsMigration: failed to decode given config; migration codepath must handle problem: %s", diags.Error())
return true // let the migration codepath deal with these errors
@@ -1366,7 +1376,16 @@ func (m *Meta) backendInitFromConfig(c *configs.Backend) (backend.Backend, cty.V
schema := b.ConfigSchema()
decSpec := schema.NoneRequired().DecoderSpec()
- configVal, hclDiags := hcldec.Decode(c.Config, decSpec, nil)
+
+ terraform.DefaultEvaluator.Meta.OriginalWorkingDir = m.WorkingDir.OriginalWorkingDir()
+ scope := terraform.DefaultEvaluationStateData.Evaluator.Scope(terraform.DefaultEvaluationStateData, nil, nil)
+ ctx, _ := scope.EvalContext([]*addrs.Reference{
+ {Subject: addrs.PathAttr{Name: "cwd"}},
+ {Subject: addrs.PathAttr{Name: "root"}},
+ {Subject: addrs.PathAttr{Name: "module"}},
+ })
+
+ configVal, hclDiags := hcldec.Decode(c.Config, decSpec, ctx)
diags = diags.Append(hclDiags)
if hclDiags.HasErrors() {
return nil, cty.NilVal, diags
diff --git a/internal/terraform/eval_default.go b/internal/terraform/eval_default.go
new file mode 100644
index 000000000..a315234a0
--- /dev/null
+++ b/internal/terraform/eval_default.go
@@ -0,0 +1,14 @@
+package terraform
+
+import "github.com/hashicorp/terraform/internal/configs"
+
+var (
+ DefaultEvaluator = new(Evaluator)
+ DefaultEvaluationStateData = new(evaluationStateData)
+)
+
+func init() {
+ DefaultEvaluator.Meta = &ContextMeta{Env: "default"}
+ DefaultEvaluator.Config = &configs.Config{Module: &configs.Module{SourceDir: "."}}
+ DefaultEvaluationStateData.Evaluator = DefaultEvaluator
+}
Is there any plan to implement this soon? I want to use variables for my access_key, to help with concealing secrets.
@thalesfsp which means?
@dimisjim It means that Terraform is now to OpenTofu what MySQL is to MariaDB.
@gothrek22 This issue is about the backend configuration block, not provider configurations.
@jan-di you are completely right, I answered in the wrong issue. Just deleted it.
I don't think this will ever be fixed because they want you to pay for Terraform Cloud.
Work in the fork for those interested: opentofu/opentofu#1042
An alternative is to use partial backend configuration: https://developer.hashicorp.com/terraform/language/settings/backends/configuration#partial-configuration

tl;dr:
1. Omit the parts of your backend config that you want to be variables. In the example above, I omitted the values I want to vary.
2. Create a file called config.s3.tfbackend. You'd want to treat this file similarly to how you treat your tfvars files.
3. Run terraform init -backend-config=config.s3.tfbackend.
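As a sketch (file contents are hypothetical), the partial-configuration setup described above looks like this, with the omitted keys supplied from the .tfbackend file at init time:

```hcl
# main.tf — partial backend: the varying values are deliberately omitted
terraform {
  backend "s3" {}
}

# config.s3.tfbackend — plain key = value pairs completing the backend;
# keep it out of version control if it contains anything sensitive:
#   bucket = "mycompany-tfstate-prod"
#   key    = "prod/terraform.tfstate"
#   region = "us-east-1"
```

Then run `terraform init -backend-config=config.s3.tfbackend`, and repeat with a different .tfbackend file per environment.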
@Theaxiom awesome explanation, thank you for translating. @dimisjim Also... this issue has been open since 2017, lol.
This is now supported in OpenTofu: https://github.com/opentofu/opentofu/releases/tag/v1.8.0-alpha1
What are the possible ways to use a variable in the Terraform backend? I'm asking this because I'm storing my code in a GitHub repo and I cannot expose credentials there. Can anyone help me with this?
@akashbogamibm why not? GitHub has secret variables for exactly this purpose. You don't have to "expose" anything.
Since Mar 23, 2017... and counting |
@thalesfsp, Is …
@ketzacoatl please ask @glenjamin; I'm just mind-blown by how long this "issue" has been open...
This feature would be a good enhancement for backends, especially for managing multiple environments; this way you won't need to hard-code the values for dev, qa, stg and prod state files.
still not supported ? 👀 |
Terraform Version
v0.9.0
Affected Resource(s)
terraform backend config
Terraform Configuration Files
Expected Behavior
Variables are used to configure the backend
Actual Behavior
Steps to Reproduce
terraform apply
Important Factoids
I wanted to extract these to variables because I'm using the same values in a few places, including in the provider config, where they work fine.
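For context, a minimal sketch of the kind of configuration that triggers the reported error, assuming an S3 backend (names are hypothetical):

```hcl
variable "state_bucket" {}
variable "region" {}

terraform {
  backend "s3" {
    # Rejected at init: "configuration cannot contain interpolations"
    bucket = "${var.state_bucket}"
    key    = "terraform.tfstate"
    region = "${var.region}"
  }
}
```

The same `${var.*}` references work in a provider block, which is what makes the restriction surprising.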