Provider configuration not passed to grandchild modules #2832

Closed
ryanuber opened this issue Jul 23, 2015 · 9 comments · Fixed by #6186

@ryanuber (Member)
When calling a module from another module, provider configuration does not seem to be propagated all the way down to the grandchild. This works as expected with a single level of modules, but when 2 or more layers are involved, the issue rears its head.

Terraform v0.6.2-dev (5a15c02)

main.tf

provider "aws" {
  access_key = "zip"
  secret_key = "zap"
  region     = "us-east-1"
}

module "foo" {
  source = "./foo"
}

foo/main.tf

module "bar" {
  source = "./bar"
}

foo/bar/main.tf

resource "aws_instance" "baz" {
  ami           = "ami-4c7a3924"
  count         = 1
  instance_type = "t2.micro"
}

Result:

$ terraform plan
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * module.foo.module.bar.provider.aws: "region": required field is not set
  * module.foo.module.bar.provider.aws: "access_key": required field is not set
  * module.foo.module.bar.provider.aws: "secret_key": required field is not set

The other weird thing is that the error is not always the same: I've seen 3 (!) different outputs for the same input. One of the others is an interactive prompt for the credentials:

$ terraform plan
provider.aws.access_key
  The access key for API operations. You can retrieve this
  from the 'Security & Credentials' section of the AWS console.

  Enter a value:

Or an invalid token error, which I would expect for this example:

$ terraform plan
Refreshing Terraform state prior to plan...

Error refreshing state: 1 error(s) occurred:

* 1 error(s) occurred:

* InvalidClientTokenId: The security token included in the request is invalid.
    status code: 403, request id: [11cdaeed-315d-11e5-9c83-7d4a35a9d7b0]
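
Until this is fixed, one workaround (a sketch of mine, not something verified in this thread) is to re-declare the provider inside the grandchild module and thread the settings down as ordinary module variables. The aws_region variable below is hypothetical, and credentials would be threaded down the same way:

main.tf

variable "aws_region" {
  default = "us-east-1"
}

module "foo" {
  source     = "./foo"
  aws_region = "${var.aws_region}"
}

foo/main.tf

variable "aws_region" {}

module "bar" {
  source     = "./bar"
  aws_region = "${var.aws_region}"
}

foo/bar/main.tf

variable "aws_region" {}

# Re-declaring the provider here avoids relying on inheritance from the root.
provider "aws" {
  region = "${var.aws_region}"
}

resource "aws_instance" "baz" {
  ami           = "ami-4c7a3924"
  instance_type = "t2.micro"
}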
@jen20 (Contributor) commented Nov 16, 2015

@ryanuber, I just tried to reproduce this against 5194eb4 with the following configuration:

main.tf

provider "aws" {
  region     = "us-east-1"
}

module "foo" {
  source = "./foo"
}

foo/main.tf

module "bar" {
  source = "./bar"
}

foo/bar/main.tf

resource "aws_vpc" "baz" {
    cidr_block = "10.0.0.0/16"
}

I'm getting mixed results, similar to yours. Destroy in particular seems to prompt for credentials each time. There definitely seems to be an issue somewhere here; I'll investigate further.

@pikeas (Contributor) commented Nov 17, 2015

There's something else at play here as well. I removed my "middle" module to work around this bug, and this is happening:

With Terraform 0.6.6:

$ TF_VAR_foo=bar ~/Downloads/terraform_0.6.6_darwin_amd64/terraform plan
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * aws_route.master: Provider doesn't support resource: aws_route

The above succeeds if I remove the aws_route resource. But since I need it, I switched to GitHub master, the latest Terraform v0.6.7-dev (24ee563):

$ TF_VAR_foo=bar terraform plan
Error configuring: 6 error(s) occurred:

* provider.aws: missing dependency: var.secret_key
* provider.aws: missing dependency: var.region
* provider.aws: missing dependency: var.access_key
* aws_security_group.ssh: missing dependency: var.foo
* module.vpc: missing dependency: var.foo
* module.kb_master: missing dependency: var.foo

It looks like no top-level variables are being passed down: the output shows failures in a provider, a module, and a plain resource.
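
For what it's worth, module inputs in Terraform are never inherited implicitly; each module has to declare a variable and have the caller pass it. Whether that explains the errors above or they are a separate regression isn't clear from this thread. A minimal sketch of the explicit wiring, reusing var.foo from the output above:

variable "foo" {}

module "vpc" {
  # ./vpc must declare its own `variable "foo" {}`; nothing flows down by default.
  source = "./vpc"
  foo    = "${var.foo}"
}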

@pshima (Contributor) commented Nov 18, 2015

For the provider-level variables I have a workaround that gets this going for the time being: I set the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION environment variables and then run terraform, and the provider-specific vars get picked up.

Not a fix, but a good workaround to date.
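
The reason this works, as I understand it: when the provider block carries no explicit settings, the AWS provider falls back to the standard environment variables, so every module authenticates the same way regardless of whether the configuration propagates. Sketch:

provider "aws" {
  # Left empty on purpose: credentials and region are read from
  # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION.
}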

@arohter commented Feb 15, 2016

For the record, resource "aws_key_pair" "noop" { count = 0 } in the middle module foo/main.tf is my working, but ugly, hack around this problem (spelled out as a block below).
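
foo/main.tf

resource "aws_key_pair" "noop" {
  # Creates nothing (count = 0), but its presence forces Terraform to
  # configure the AWS provider in this intermediate module, which the
  # grandchild module then inherits. That reading of why it works is
  # mine, not something stated above.
  count = 0
}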

@rvangundy
@arohter's solution works well when creating new resources; however, if you later remove a module from your configuration and run terraform apply, the provider is lost and those resources can't be destroyed.

In my case the AWS environment variable fix doesn't work. I'm using S3 remote state with buckets in a different AWS account than the one I'm applying against (remote state uses the default AWS creds). When the provider is "lost", Terraform falls back to the default credentials (environment variables or the [default] block in ~/.aws/credentials) and tries to destroy resources in that account, giving me the following error:

 InvalidParameterException: Identifier is for account-id-B. Your accountId is account-id-A

Also, this is intermittent... the provider is lost only some of the time. I'm not really familiar with how the Terraform state tree works, but this feels like a race condition: it's as if the provider were a global reference that is set and lost as the apply routine traverses the tree.
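
A hedged sketch of one way to avoid the fallback entirely: pin the cross-account resources to an explicit provider alias, so that if the default provider is "lost" there is nothing for them to silently fall back to. The alias and variable names here are hypothetical:

provider "aws" {
  alias      = "target"                    # hypothetical alias
  access_key = "${var.target_access_key}"  # hypothetical variables holding the
  secret_key = "${var.target_secret_key}"  # second account's credentials
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  provider      = "aws.target"  # explicit binding, never the default chain
  ami           = "ami-4c7a3924"
  instance_type = "t2.micro"
}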

phinze added a commit that referenced this issue Apr 14, 2016 (and again Apr 15, 2016; jen20 pushed the same commit Apr 18, 2016):

The flattening process was not properly drawing dependencies between provider
nodes in modules and their parent provider nodes.

Fixes #2832
Fixes #4443
Fixes #4865
@evanstachowiak

I'm still experiencing this issue on Terraform 0.8.6. Is anyone else having the same problem?

@jantman commented Sep 11, 2018
I seem to still be having this issue on 0.11.8.
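
On 0.11 the provider wiring can also be made explicit rather than relying on inheritance, via the providers map on module blocks. A minimal sketch in the 0.11-era syntax; the alias name is hypothetical:

provider "aws" {
  alias  = "main"
  region = "us-east-1"
}

module "foo" {
  source = "./foo"

  # Explicitly hand the configured provider to the child module; the
  # child can pass it on to its own children the same way.
  providers = {
    "aws" = "aws.main"
  }
}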

@magnetik
For people who still seem to face this: I was using both named profiles and environment variables. The environment variables take precedence over the profiles, which caused the confusion.
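
If you are mixing the two, unsetting the environment variables, or pinning the provider to a profile explicitly, narrows the ambiguity. A sketch with a hypothetical profile name; note that in at least some provider versions static environment credentials still win over an explicit profile, so treat this as an assumption to verify:

provider "aws" {
  profile = "my-profile"  # hypothetical named profile from ~/.aws/credentials
  region  = "us-east-1"
}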

@ghost commented Oct 11, 2019
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Oct 11, 2019