AWS config role_arn not working #5592
Comments
This issue and #2420 may be duplicates; more discussion and workarounds are listed there.
So I've encountered this error too. As a workaround you can configure the AWS SDK or use aws sts to get the missing session token, which gets past this error. However, once I apply that workaround, Terraform crashes. Reference for the workaround: https://blog.gruntwork.io/authenticating-to-aws-with-the-credentials-file-d16c0fbcbf9e

terraform -v
Terraform v0.11.10
+ provider.aws v1.40.0

Some output from when Terraform crashes:
...
2018-10-25T14:05:54.253-0400 [DEBUG] plugin.terraform-provider-aws_v1.40.0_x4: /opt/goenv/versions/1.11.1/src/net/rpc/server.go:481 +0x47e
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalConfigProvider, err: unexpected EOF
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalSequence, err: unexpected EOF
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalOpFilter, err: unexpected EOF
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalSequence, err: unexpected EOF
2018/10/25 14:05:54 [TRACE] [walkPlan] Exiting eval tree: provider.aws
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_iam_policy.lambda_policy"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_iam_role.iam_for_lambda"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.iam_policy_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_lambda_function.tagcheck_lambda"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_cloudwatch_event_rule.check_tags"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_iam_role_policy_attachment.lambda_policy"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.iam_role_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.lambda_function_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.cloudwatch_event_rule_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_lambda_permission.allow_cloudwatch"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_cloudwatch_event_target.tagcheck_lambda"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.lambda_arn"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.cloudwatch_event_target_id"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "provider.aws (close)"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "meta.count-boundary (count boundary fixup)"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "root"
2018-10-25T14:05:54.255-0400 [DEBUG] plugin: plugin process exited: path=/Users/paulgreene/Repos443/devops/vanguard/iac/terraform/plans/lambda-tagcheck/.terraform/plugins/darwin_amd64/terraform-provider-aws_v1.40.0_x4
2018/10/25 14:05:54 [DEBUG] plugin: waiting for all plugin processes to complete...
2018-10-25T14:05:54.256-0400 [WARN ] plugin: error closing client during Kill: err="connection is shut down"
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.
When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.
[1]: https://github.com/hashicorp/terraform/issues
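For context, the session-token workaround mentioned above generally looks like the sketch below; the role ARN, session name, and profile values are placeholders, not values from this issue. Per the comment, Terraform then crashes anyway.

# Manually assume the role and export the temporary credentials,
# so the provider finds a session token in the environment.
# (role ARN and session name are placeholders)
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/Cross_Account \
  --role-session-name terraform \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)

export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)

terraform plan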
I have similar issues that are NOT MFA related. Running in ECS on Fargate, I can get a config file like this working with the CLI (sketched at the end of this comment).
However, when I try this with Terraform (even with the S3 backend), it doesn't even bother to assume the role. It just sees that it's running in EC2 and takes whatever it finds on the metadata interface.
I don't specify any assume-role settings in the provider configuration, because we still run Terraform by hand on our workstations. I imagine I could get it working with assume-role config in the provider, but that's not what I'm going for. The AWS CLI documentation describes the full set of config-file options.
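A config of the sort described typically relies on credential_source so the CLI picks up the task role on Fargate; the profile name and role ARN below are hypothetical.

# ~/.aws/config (profile name and ARN are placeholders)
[profile deploy]
role_arn          = arn:aws:iam::123456789012:role/deploy-role
credential_source = EcsContainer

With this, aws --profile deploy ... works from inside the task, while Terraform (at the versions discussed here) ignores the role and uses the raw metadata credentials instead.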
It looks like my issue may be solved for the S3 backend in 0.12: hashicorp/terraform#19190
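For reference, the S3 backend configuration affected looks roughly like this; the bucket, key, region, and profile values are placeholders.

terraform {
  backend "s3" {
    bucket  = "my-state-bucket"
    key     = "env/terraform.tfstate"
    region  = "us-east-1"
    profile = "deploy"
  }
}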
My issue might have been fixed in version 1.41.0 of the terraform provider #5018 (comment).
And then I get a stack trace starting from what appears to be a reference to the EC2 metadata service.
Hi folks! We expect this to be finally working as expected with the aws-sdk-go-base upgrade scheduled for the v2.32.0 release (#10379). I encourage you to upgrade and verify that next week. I'm going to add this issue to the release milestone so you'll see a reminder when it's out. Please open a new issue if your problem persists after that upgrade. Thanks!
Closing as #10379 was merged previously and v2.32.0 has been released. 👍
This has been released in version 2.32.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
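Upgrading amounts to raising the provider version constraint; a minimal sketch (the region is a placeholder):

provider "aws" {
  version = "~> 2.32"
  region  = "us-east-1"
}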
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
Terraform Version
Terraform v0.11.8, AWS provider v1.32.0
Affected Resource(s)
Not resource-specific; this concerns assuming roles and access in general.
Configuration Files
~/.aws/credentials
~/.aws/config
provider.tf
vpc.tf
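A minimal sketch of the cross-account setup this issue describes; the account IDs, key values, and profile names are hypothetical, only the Cross_Account role name comes from the issue.

# ~/.aws/credentials -- keys for the main account only
[default]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...

# ~/.aws/config -- profile that assumes a role in the test account
[profile test]
role_arn       = arn:aws:iam::123456789012:role/Cross_Account
source_profile = default

# provider.tf -- no hard-coded role; the profile supplies it
provider "aws" {
  profile = "test"
  region  = "eu-west-1"
}

vpc.tf would then declare an ordinary resource (for example an aws_vpc) to be created in the test account.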
Expected Behavior
Terraform should simply assume the Cross_Account role in the test account when creating the resource.
Actual Behavior
Steps to Reproduce
terraform init
terraform plan
References
There are so many different issues about this, and one of the last ones I found claimed it should have been fixed in v1.14 of the AWS provider. But I'm now running v1.32 and I still have this problem.
Notes
This problem has persisted since well before the provider split, and the way I have to solve it is to keep individual keys for each account I'm using in the ~/.aws/credentials file. But I now have so many AWS accounts that this is becoming hard to handle (especially with regard to cycling keys; I'd rather cycle ONE set of keys every week than twenty once every two months!).
I'm also under pressure from the devs and the boss (to let the devs become DevOps, meaning they need more access to our AWS "stuff"), and adding all my users to every AWS account, forcing them to cycle their keys and passwords reasonably often, etc., doesn't scale.
Because the devs won't have the access I have (they'll have the bare minimum, e.g. only able to add new things, not delete or modify anything; we haven't fully worked this out yet, mostly because there's no point while TF can't handle this), I don't want to hard-code the role in any TF file: it must come from the AWS config/credentials files.
It's simply becoming such a burden NOT to be able to do what the aws command has been able to do for years. I'm creating a new ticket instead of adding noise to tickets that are either closed (without actually being fixed) or were created before this was supposed to have been fixed (v1.14). It's also almost impossible to figure out which is the relevant one; I found over ten issues that give basically the same examples I'm giving above.
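For comparison, the CLI resolves the role from the shared config files without any extra setup (using the hypothetical test profile sketched under Configuration Files above):

aws sts get-caller-identity --profile test
# prints the assumed Cross_Account role identity; no Terraform-side config needed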
I'm also filing it because the AssumeRoleTokenProviderNotSetError error isn't mentioned anywhere else, which might indicate that this is a new error.