
destroying aws_iam_policy_attachment to a managed policy deletes everyone's attachments #5483

Closed
slowbluecamera opened this issue Mar 7, 2016 · 23 comments

Comments

@slowbluecamera

If you use the "aws_iam_policy_attachment" resource to attach a role to a managed policy, then when you destroy the configuration, it also removes attachments made by other configurations, or even attachments set up manually.

In the example configurations below, "plan_one.tf" sets up an attachment to the "AWSLambdaBasicExecutionRole" managed policy. The configuration "plan_two.tf" sets up a similar attachment. If you apply both configurations and then destroy one, you'll find that both attachments have been removed.

Also, if you have set up role attachments to the managed policy via other scripts, or manually, you will find that those attachments have been removed (which is how we discovered this! :-( ).

Reproduced with Terraform 0.6.12 on OS X.

(This is my first issue reported to the terraform project. I've reviewed submitting guidelines and tried to be complete, but I'd like this issue to be as useful as possible. So if there is any additional information needed or changes in style that would be helpful, please don't hesitate to let me know. Thanks!)

Workarounds:

  • don't use managed policies in terraform configurations
  • (haven't tried this) use IAM permissions to prevent Terraform from deleting attachments it hasn't set up, and then make sure your Terraform configurations don't use the same managed policies.
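The second workaround could be sketched roughly as below. This is untested; the resource name, policy name, and the use of the `iam:PolicyARN` condition key against detach actions are assumptions for illustration, not something verified in this issue.

```hcl
# Untested sketch: deny detach calls against AWS-managed policy ARNs so a
# stray "terraform destroy" cannot remove attachments made outside Terraform.
# The resource name and policy body are illustrative, not prescriptive.
resource "aws_iam_policy" "deny_detach_aws_managed" {
  name = "DenyDetachAwsManagedPolicies"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "iam:DetachRolePolicy",
        "iam:DetachUserPolicy",
        "iam:DetachGroupPolicy"
      ],
      "Resource": "*",
      "Condition": {
        "ArnLike": {
          "iam:PolicyARN": "arn:aws:iam::aws:policy/*"
        }
      }
    }
  ]
}
POLICY
}
```

The idea would be to attach a policy like this to the credentials Terraform runs under, so that even if Terraform attempts the global detach, IAM refuses it for AWS-managed policies.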

Steps to reproduce:

  one/
     plan_one.tf
     variables.tf
  two/
     plan_two.tf
     variables.tf
  1. Create the two directories above (using the file contents below).
  2. set environment variables as appropriate
    export TF_VAR_access_key=<AWS ACCESS KEY>
    export TF_VAR_secret_key=<AWS SECRET KEY>
  3. "cd one; terraform apply" to create config one
  4. "cd two; terraform apply" to create config two
  5. "cd one; terraform destroy" to destroy config one
  6. "cd two; terraform plan" to see if config two is still complete

Notice that the policy attachment in plan_two.tf is no longer there.

plan_one.tf

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region = "${var.region}"
}

# Configure security settings for Lambda
resource "aws_iam_role" "lambda_exec_role" {
  name = "PlanOneLambdaExecRole"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# Attach role to Managed Policy
resource "aws_iam_policy_attachment" "test_attach" {
  name = "PlanOneLambdaExecPolicy"
  roles = ["${aws_iam_role.lambda_exec_role.id}"]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

two/plan_two.tf

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region = "${var.region}"
}

# Configure security settings for Lambda
resource "aws_iam_role" "lambda_exec_role" {
  name = "PlanTwoLambdaExecRole"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# Attach role to Managed Policy
resource "aws_iam_policy_attachment" "test_attach" {
  name = "PlanTwoLambdaExecPolicy"
  roles = ["${aws_iam_role.lambda_exec_role.id}"]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

variables.tf (identical in both configurations)

# to set aws key env variables
variable "access_key" {}
variable "secret_key" {}

variable "region" {
  default = "us-east-1"
}
@elisehuard

Same thing here, version 0.6.12. We noticed that AmazonS3FullAccess specifically kept disappearing from other roles, which gave us a little headache in production! Only today did I notice from the Terraform change output that it operated on roles outside of the stack.

@jeffastorey

I think this also happens when just running an update (terraform apply) on IAM attachments that you already created with terraform. So it's pretty destructive whenever terraform tries to do any syncing of its knowledge of IAM attachments vs what actually exists in AWS.

I think what's happening is that when Terraform tries to reconcile its state file with the state of the world, it looks for every entity with the policy attached, and if an attachment is not in its state file (because it was created through some other method), Terraform assumes it should be detached. The offending code is around https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/resource_aws_iam_policy_attachment.go#L106.

@bigkraig
Contributor

+1, we locked ourselves out of our account with this

@slowbluecamera
Author

Looking into this further, this issue is similar to, if not a duplicate of, issue #2100.

In any event, I've verified that @graycoder's fix in PR #4016 resolves the role-based break case presented and the approach makes sense. Good work!

@iceycake
Contributor

+1, we locked ourselves out of all accounts with this bug again.

@atward
Contributor

atward commented Apr 15, 2016

Seconded.

The problem here is that the relationship of the attachment is 1 policy : many roles/users/groups. Terraform is viewing the attachment from the perspective of the policy. Perhaps an alternative resource/mechanism to attach many policies to 1 role/user/group would suit.

@pgray
Contributor

pgray commented Apr 15, 2016

@atward #4016 solves this. I wish someone would review/merge it.

@gugahoi

gugahoi commented Apr 18, 2016

+1 - it becomes almost impossible to manage policies within one account with the current implementation. Attachments created by previous terraform applies get removed; only the last run is respected, even if earlier configurations still require them.

@slowbluecamera
Author

Because #6858 was mentioned as possibly superseding #4016, I retested this against current master (which includes #6858) and confirm that the test cases above still fail.

It would be really great if #4016 could be reviewed and merged if acceptable. It has had several sets of eyes across it in the last six months. Without it we have to duplicate managed policies rather than use official policies, and risk having policy attachments accidentally removed.
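The duplication workaround mentioned above might look like the following. This is a sketch: the policy body is a paraphrase of what AWSLambdaBasicExecutionRole grants (CloudWatch Logs write access), not the official policy document, and the names are illustrative.

```hcl
# Sketch: duplicate the managed policy as a customer-managed one so no two
# configurations share a policy ARN. The statement below paraphrases
# AWSLambdaBasicExecutionRole; verify against the official policy.
resource "aws_iam_policy" "lambda_basic_execution_copy" {
  name = "PlanOneLambdaBasicExecution"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
POLICY
}

# The attachment then references the copy, so destroying this configuration
# only touches attachments of a policy no one else uses.
resource "aws_iam_policy_attachment" "test_attach" {
  name = "PlanOneLambdaExecPolicy"
  roles = ["${aws_iam_role.lambda_exec_role.id}"]
  policy_arn = "${aws_iam_policy.lambda_basic_execution_copy.arn}"
}
```

The cost is keeping the copies in sync with the official policies by hand, which is exactly why merging #4016 would be preferable.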

@johnrengelman
Contributor

@slowbluecamera in your test case did you switch your resources to be the new ones introduced in #6858 where you map a single attachment to a single user/role/group (different resources for each one)?
That PR doesn't touch the existing aws_iam_policy_attachment resource.

@slowbluecamera
Author

@johnrengelman : My apologies, I did simply re-test the above without understanding the change fully.
Thanks for pointing this out. I have re-tested using the new resource and this gives us a useful path forward. \o/

Specifically replacing the existing policy attachment:

# Attach role to Managed Policy
resource "aws_iam_policy_attachment" "test_attach" {
  name = "PlanOneLambdaExecPolicy"
  roles = ["${aws_iam_role.lambda_exec_role.id}"]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

With the new resource:

# Attach role to Managed Policy
resource "aws_iam_role_policy_attachment" "test_attach" {
  role = "${aws_iam_role.lambda_exec_role.id}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

Works as desired. Good stuff! I look forward to using this in a forthcoming version.

@grahamsk

This has been affecting me as well. Is there any update on the fix to this?

@alexsomesan
Member

I've run into this myself.

Specifically, when trying to attach the same managed policy to a second role in the same account terraform plan shows me this:

~ module.bastion-server.aws_iam_policy_attachment.bastion_server_node_policy
    roles.162511441: "etcd-cluster-node-role" => ""
    roles.609976136: "" => "bastion-server-node-role"

Here etcd-cluster-node-role already exists and has the same managed policy attached, but is not part of the current Terraform state. Still, Terraform plans to remove its attachment.

I'm currently sifting through the Terraform code to figure out what's causing this. Will update here when I find something.

@johnrengelman
Contributor

@grahamsk @alexsomesan - have you tried using the new resources that are specific to role, group, and user, instead of the catch-all that can assign all of those in one resource (as per my comment above)?

@jaygorrell

@johnrengelman I have, and that works. The root issue, though, is that those are from the perspective of the respective resource. Similarly, aws_iam_policy_attachment is from the perspective of the policy.

If you're dealing with a user and set the policy list (empty), you remove that policy from the user. If you're dealing with a policy and set the attachment list (empty), you remove anything that was attached to the policy.

The difficulty here is being able to define the resource in either direction. TF may need to make sure that adding this resource matches the list already in place before successfully adding it.

@johnrengelman
Contributor

The point of the new resources is to manage attachments from the point of view of the attachment itself, just as the AWS API models it: an attachment of a policy to a single resource. See http://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachGroupPolicy.html
The original resource tries to model multiple attachments, which requires multiple API calls, and ends up in this scenario since it tries to manage the whole collection of attachments.
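With that model, attaching several policies to one role becomes one resource per (role, policy) pair, each mirroring a single AttachRolePolicy API call. A sketch (the resource labels and the second policy ARN are illustrative, not from this issue):

```hcl
# One aws_iam_role_policy_attachment per (role, policy) pair. Destroying
# either resource detaches only its own pair, leaving other entities
# attached to the same managed policy untouched.
resource "aws_iam_role_policy_attachment" "lambda_basic" {
  role = "${aws_iam_role.lambda_exec_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy_attachment" "s3_read_only" {
  role = "${aws_iam_role.lambda_exec_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
```

Because each resource owns exactly one attachment, there is no collection for Terraform to "reconcile" and therefore nothing outside its state to delete.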

@iroller
Contributor

iroller commented Sep 2, 2016

We're experiencing the same issue mentioned by @alexsomesan

@iroller
Contributor

iroller commented Sep 2, 2016

@johnrengelman I've tried switching to aws_iam_role_policy_attachment and it seems to have fixed the issue here.

@c4urself

c4urself commented Nov 2, 2016

Seems related to #6045; the aws_iam_role_policy_attachment is not working for us either -- changed everything to use that and still get the same behaviour (0.7.4). Going to attach manually for now.

@johnrengelman
Contributor

@c4urself can you post a copy of your TF files that use the aws_iam_role_policy_attachment resource? Everyone above has stated that switching to this resource from the original aws_iam_policy_attachment fixed the issue for them.

@robbertvanwaveren

I can confirm that switching to role_policy fixes the problem for me as well.

Please note that after switching all your plans to this, you do need to run all of them once, followed by running all of them once more: the first time you switch, each plan will still trigger a global delete.

@kc-dot-io

I can confirm that switching to role_policy_attachment also works for me, on 0.9.7.

@ghost

ghost commented Apr 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 11, 2020