
AWS config role_arn not working #5592

Closed
FransUrbo opened this issue Aug 17, 2018 · 9 comments
Labels
bug Addresses a defect in current functionality. provider Pertains to the provider itself, rather than any interaction with AWS.
@FransUrbo

FransUrbo commented Aug 17, 2018

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.11.8, AWS provider v1.32.0

Affected Resource(s)

Not resource-specific; this concerns role assumption and provider authentication in general.

Configuration Files

~/.aws/credentials

[root]
aws_access_key_id = MYKEY
aws_secret_access_key = SECRETKEY

~/.aws/config

[default]
region = eu-west-1

[profile test]
region = eu-west-1
source_profile = root
role_arn = arn:aws:iam::TEST_ACCOUNT_ID:role/Cross_Account
mfa_serial = arn:aws:iam::ROOT_ACCOUNT_ID:mfa/turbo

provider.tf

provider "aws" {
  region  = "eu-west-1"
  profile = "test"
}

vpc.tf

resource "aws_vpc" "my_test" {
  cidr_block                  = "192.168.0.0/16"

  enable_dns_support          = "true"
  enable_dns_hostnames        = "true"
}
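For reference, the profile chain in the files above resolves in a straightforward way. This is a minimal illustration using Python's configparser, not the actual SDK resolution code; the file contents mirror the reporter's ~/.aws/config:

```python
# Minimal sketch of how the shared-config profile chain above is structured:
# the "test" profile points at credentials from source_profile ("root"),
# names a role_arn to assume, and an mfa_serial that should trigger an MFA
# token prompt. (Illustrative only; not the SDK's actual resolution code.)
import configparser

config_text = """
[default]
region = eu-west-1

[profile test]
region = eu-west-1
source_profile = root
role_arn = arn:aws:iam::TEST_ACCOUNT_ID:role/Cross_Account
mfa_serial = arn:aws:iam::ROOT_ACCOUNT_ID:mfa/turbo
"""

config = configparser.ConfigParser()
config.read_string(config_text)

profile = config["profile test"]
# Expected behavior: read static credentials from "root", prompt for an MFA
# token for mfa_serial, then call STS AssumeRole on role_arn.
print(profile["source_profile"])
print(profile["role_arn"])
```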

Expected Behavior

Terraform should simply assume the Cross_Account role in the test account when creating the resource.

Actual Behavior

Error: Error running plan: 1 error(s) occurred:

* provider.aws: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.

Steps to Reproduce

  1. Setup the corresponding files above
  2. terraform init
  3. terraform plan

References

There are many different issues regarding this, and one of the most recent I found claimed this should have been fixed in v1.14 of the AWS provider. But I'm now running v1.32 and still have this problem.

Notes

This problem has persisted since well before the provider split, and the way I have to work around it is to keep individual keys for each account I'm using in the ~/.aws/credentials file.

But I now have so many AWS accounts that this is becoming hard to handle (especially with regard to cycling keys - I'd rather cycle ONE set of keys every week than twenty once every two months!).

I'm also under pressure from the devs and the boss to let the devs do DevOps work, meaning they need more access to our AWS resources. That would mean adding all my users to every AWS account, forcing them to cycle their keys and passwords reasonably often, and so on.

Because the devs won't have the access I have (they'll have the bare minimum - for example, able to add new resources but not delete or modify anything; we haven't fully worked this out yet, mostly because there's no point while Terraform can't handle it), I don't want to hard-code the role in any Terraform file - it must come from the AWS config/credentials files.

It's simply becoming such a burden NOT to be able to do what the aws command has been able to do for years.

I'm creating a new ticket instead of adding noise to tickets that are either closed (without actually being fixed) or that were created before this was supposed to have been fixed (v1.14). It's also almost impossible to figure out which is the relevant one - I found over ten issues giving basically the same examples I'm giving above.

But also because the AssumeRoleTokenProviderNotSetError error isn't mentioned anywhere else, which might indicate that this is a new error.

@FransUrbo FransUrbo changed the title AWS assume role not working AWS config role_arn not working Aug 17, 2018
@bflad bflad added bug Addresses a defect in current functionality. provider Pertains to the provider itself, rather than any interaction with AWS. labels Aug 17, 2018
@chroju
Contributor

chroju commented Oct 12, 2018

This issue may be a duplicate of #2420; more discussion and workarounds are listed there.

@prdgreene

prdgreene commented Oct 25, 2018

I've encountered this error too. As a workaround you can configure the AWS SDK or use aws sts to get the missing session token, which gets past this error. However, once I apply this workaround, Terraform crashes.

Reference for the workaround: https://blog.gruntwork.io/authenticating-to-aws-with-the-credentials-file-d16c0fbcbf9e
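The session-token workaround can be sketched as follows. This is a hypothetical illustration, not the blog post's exact steps: the response dict stands in for a real STS GetSessionToken call (e.g. `aws sts get-session-token --serial-number <mfa-arn> --token-code <code>`), and all key values are placeholders. Terraform's AWS provider reads the standard AWS_* environment variables:

```python
# Hypothetical sketch of the workaround: obtain temporary credentials via
# STS GetSessionToken (faked here with a placeholder dict), then export them
# as the standard environment variables the AWS provider picks up.
# A real call would look like:
#   boto3.client("sts").get_session_token(SerialNumber=mfa_arn, TokenCode=code)
import os

response = {  # placeholder standing in for the real STS response
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "example-secret",
        "SessionToken": "example-token",
    }
}

creds = response["Credentials"]
os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]
```

With these variables set, the provider no longer needs to perform the MFA-gated assume-role itself, which is what sidesteps the AssumeRoleTokenProviderNotSetError.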

terraform -v
Terraform v0.11.10
+ provider.aws v1.40.0

some output when terraform crashes:

...
2018-10-25T14:05:54.253-0400 [DEBUG] plugin.terraform-provider-aws_v1.40.0_x4: 	/opt/goenv/versions/1.11.1/src/net/rpc/server.go:481 +0x47e
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalConfigProvider, err: unexpected EOF
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalSequence, err: unexpected EOF
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalOpFilter, err: unexpected EOF
2018/10/25 14:05:54 [ERROR] root: eval: *terraform.EvalSequence, err: unexpected EOF
2018/10/25 14:05:54 [TRACE] [walkPlan] Exiting eval tree: provider.aws
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_iam_policy.lambda_policy"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_iam_role.iam_for_lambda"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.iam_policy_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_lambda_function.tagcheck_lambda"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_cloudwatch_event_rule.check_tags"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_iam_role_policy_attachment.lambda_policy"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.iam_role_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.lambda_function_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.cloudwatch_event_rule_name"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_lambda_permission.allow_cloudwatch"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.aws_cloudwatch_event_target.tagcheck_lambda"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.lambda_arn"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "module.lambda_tagcheck.output.cloudwatch_event_target_id"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "provider.aws (close)"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "meta.count-boundary (count boundary fixup)"
2018/10/25 14:05:54 [TRACE] dag/walk: upstream errored, not walking "root"
2018-10-25T14:05:54.255-0400 [DEBUG] plugin: plugin process exited: path=/Users/paulgreene/Repos443/devops/vanguard/iac/terraform/plans/lambda-tagcheck/.terraform/plugins/darwin_amd64/terraform-provider-aws_v1.40.0_x4
2018/10/25 14:05:54 [DEBUG] plugin: waiting for all plugin processes to complete...
2018-10-25T14:05:54.256-0400 [WARN ] plugin: error closing client during Kill: err="connection is shut down"



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

@iancward
Contributor

iancward commented Nov 19, 2018

I have similar issues that are NOT MFA related. Running in an ECS on Fargate instance, I can get a config file like this working with the CLI (the credentials file is empty; the credential_source directive tells it to look at the EC2 metadata service for credentials to use during the assume-role action):

[default]
region = us-west-2
role_arn = <the role ARN to assume>
external_id = <the external ID to use during assume>
credential_source = Ec2InstanceMetadata
role_session_name = <the name to give the assumed role session>

However, when I try this with terraform (even with the S3 backend), it doesn't even bother to assume the role. It just sees that it's running in EC2 and takes whatever it sees from the metadata interface.

Initializing the backend...
2018/11/19 20:31:14 [INFO] Building AWS region structure
2018/11/19 20:31:14 [INFO] Building AWS auth structure
2018/11/19 20:31:14 [INFO] Setting AWS metadata API timeout to 100ms
2018/11/19 20:31:14 [INFO] AWS EC2 instance detected via default metadata API endpoint, EC2RoleProvider added to the auth chain
2018/11/19 20:31:15 [INFO] AWS Auth provider used: "EC2RoleProvider"

I don't specify any assume role stuff in the provider configuration because we still run terraform by hand on our workstations. I imagine I could get it working with assume role config in the provider, but that's not what I'm going for.

Here is the full documentation for the AWS cli config file:
https://docs.aws.amazon.com/cli/latest/topic/config-vars.html
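For comparison, the provider-level configuration being avoided above would look roughly like this. This is a hypothetical sketch: the assume_role block is a real AWS provider feature, but every ARN, ID, and name here is a placeholder, and the point of the comment is precisely that this should not need to be hard-coded:

```hcl
# Hypothetical provider-level equivalent of the shared-config profile above
# (all values are placeholders). The commenter wants this to come from
# ~/.aws/config instead of being hard-coded here.
provider "aws" {
  region = "us-west-2"

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/example"
    external_id  = "example-external-id"
    session_name = "example-session"
  }
}
```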

@iancward
Contributor

It looks like my issue may be solved for the S3 backend in 0.12 by hashicorp/terraform#19190

@iancward
Contributor

My issue might have been fixed in version 1.41.0 of the AWS provider (#5018 (comment)).
But I get a crash (not the same one @prdgreene got, though) when I try to use an intermediate profile to get around this:

Initializing the backend...
2018/11/19 22:01:14 [INFO] Building AWS region structure
2018/11/19 22:01:14 [INFO] Building AWS auth structure
2018/11/19 22:01:14 [INFO] Setting AWS metadata API timeout to 100ms
2018/11/19 22:01:14 [DEBUG] plugin: waiting for all plugin processes to complete...
2018/11/19 22:01:14 ERROR: failed to create session with AWS_SDK_LOAD_CONFIG enabled. Use session.NewSession to handle errors occurring during session creation. Error: SharedConfigAssumeRoleError: failed to load assume role for arn:aws:iam:<foo>:role<foo>, source profile has no shared credentials


!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x86ea93]

And then I get a stack trace starting from what appears to be a reference to the EC2 metadata service.
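The "intermediate profile" layout being attempted might look like the following. This is a hypothetical sketch (profile names and ARNs are placeholders): a base profile sources credentials from the instance metadata service, and a second profile assumes the role via source_profile, which is the chain the SharedConfigAssumeRoleError above rejects:

```ini
# Hypothetical intermediate-profile layout for the crash above; names and
# ARNs are placeholders. The base profile holds no static keys - it relies
# on credential_source - which is what "source profile has no shared
# credentials" complains about.
[default]
region = us-west-2
credential_source = Ec2InstanceMetadata

[profile assumed]
region = us-west-2
source_profile = default
role_arn = arn:aws:iam::123456789012:role/example
external_id = example-external-id
```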

@aeschright
Contributor

Hi folks! We expect this to finally work as expected with the aws-sdk-go-base upgrade scheduled for the v2.32.0 release (#10379). I encourage you to upgrade and verify once it ships next week. I'm adding this issue to the release milestone so you'll see a reminder when it's out.

Please open a new issue if your problem persists after that upgrade. Thanks!

@bflad
Contributor

bflad commented Oct 10, 2019

Closing as #10379 was merged previously and v2.32.0 has been released. 👍

@bflad bflad closed this as completed Oct 10, 2019
@ghost

ghost commented Oct 10, 2019

This has been released in version 2.32.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

@ghost

ghost commented Nov 10, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Nov 10, 2019