
terraform repeatedly attempts to modify an s3 bucket defined with a policy including a CanonicalUser #6642

Closed
giladwolff opened this issue May 12, 2016 · 11 comments

Comments

giladwolff commented May 12, 2016

Terraform Version

0.6.15

Affected Resource(s)

  • aws_s3_bucket

Terraform Configuration Files

This is the policy from the s3 bucket I'm setting up:

policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::${var.console_app_bucket_name}",
      "Principal": {"CanonicalUser": "${var.cloudfront_origin_s3_canonical_user_id}"}
    }
  ]
}
EOF

Debug Output

policy: "{\"Statement\":[{\"Action\":\"s3:GetObject\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <ORIGIN_ID>\"},\"Resource\":\"arn:aws:s3:::ccs-mastodon-consoleapp-bucket-dev/*\",\"Sid\":\"\"}],\"Version\":\"2012-10-17\"}" =>
"{\"Statement\":[{\"Action\":\"s3:GetObject\",\"Effect\":\"Allow\",\"Principal\":{\"CanonicalUser":\"<CANONICAL_USER_GUID_A_VERY_LONG_NUMBER\"},\"Resource\":\"arn:aws:s3:::<SOME_BUCKET>/*\",\"Sid\":\"\"}],\"Version\":\"2012-10-17\"}"

Expected Behavior

Terraform should not attempt to modify the s3 bucket as the AWS principal and the CanonicalUser principal are the same principal referred to by different names.

Actual Behavior

Terraform is trying to modify the s3 bucket policy.

Steps to Reproduce

  1. terraform plan to see that terraform is going to modify the s3 bucket policy.

Important Factoids

The CloudFront origin access identity was created by terraform as well.

Workaround

I now "resolve" the name myself and use 'format' to generate the principal arn:

output "cloudfront_origin_s3_canonical_user_id" {
    value = "${format("arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity %s",aws_cloudfront_origin_access_identity.cloudfront_origin_access_identity.id)}"
}
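A minimal sketch of the matching policy change, reusing the bucket policy from the top of this issue but switching the principal key from CanonicalUser to AWS (note the variable, despite its name, now carries the IAM ARN):

# Assumption: var.cloudfront_origin_s3_canonical_user_id is now wired to the
# output above, so it holds the IAM ARN rather than the canonical user ID.
policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::${var.console_app_bucket_name}",
      "Principal": {"AWS": "${var.cloudfront_origin_s3_canonical_user_id}"}
    }
  ]
}
EOF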
@PurrBiscuit
Contributor

We're having a similar issue since upgrading to terraform 0.6.15, although not quite the same: our S3 bucket policy keeps showing up as changing in the terraform plan even though nothing changes except the ordering of the Action and Resource arrays within the statement:

~ module.example_bucket.aws_s3_bucket.example
    policy: "{\"Statement\":[{\"Action\":[\"s3:ListBucket\",\"s3:PutObject\",\"s3:AbortMultipartUpload\",\"s3:PutObjectAcl\",\"s3:GetObject\",\"s3:DeleteObject\",\"s3:GetObjectAcl\",\"s3:ListMultipartUploadParts\"],\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::111111111111:role/build-agent\"},\"Resource\":[\"arn:aws:s3:::example.company.zone/*\",\"arn:aws:s3:::example.company.zone\"],\"Sid\":\"1\"}],\"Version\":\"2012-10-17\"}" => "{\"Statement\":[{\"Action\":[\"s3:AbortMultipartUpload\",\"s3:GetObjectAcl\",\"s3:ListBucket\",\"s3:DeleteObject\",\"s3:PutObjectAcl\",\"s3:GetObject\",\"s3:ListMultipartUploadParts\",\"s3:PutObject\"],\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::111111111111:role/build-agent\"},\"Resource\":[\"arn:aws:s3:::example.company.zone\",\"arn:aws:s3:::example.company.zone/*\"],\"Sid\":\"1\"}],\"Version\":\"2012-10-17\"}"

@giladwolff
Author

I believe this is a different issue: AWS always generates a Sid for each policy statement, and if you don't have one in your aws_s3_bucket policy, terraform will think the policy changed. To solve it, just add

"Sid": ""

to your bucket policy statement.
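For example, a minimal statement carrying the explicit empty Sid (bucket and principal values are illustrative, borrowed from the redacted config below):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:role/build-agent"},
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example.company.zone/*"
    }
  ]
}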

@PurrBiscuit
Contributor

@giladwolff I double checked and it looks like our policy already has the Sid specified. Here's what our config for that policy looks like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:role/build-agent"
            },
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetObjectAcl",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::${bucket_name}",
                "arn:aws:s3:::${bucket_name}/*"
            ]
        }
    ]
}

and here are the resources that use that policy.

resource "aws_s3_bucket" "example" {
    bucket = "${lookup(var.bucket_name, var.env)}"
    acl = "public-read"

    policy = "${template_file.life_policy.rendered}"

    website {
        index_document = "index.html"
        error_document = "error.html"
    }

    tags {
        Name = "example"
        Env = "${var.env}"
    }
}

resource "template_file" "example_policy" {
    template = "${file("s3/example/policies/example-s3-policy.json")}"

    vars {
        bucket_name = "${lookup(var.bucket_name, var.env)}"
    }

    lifecycle {
        create_before_destroy = true
    }
}

@giladwolff
Author

My bad, I somehow missed the Sid = "1" in the 'to' policy. The only thing I can think of is to make the order in the policy statement match what AWS renders and see if that works.

@PurrBiscuit
Contributor

I've done that too a few times, reordering the policy to match what terraform is trying to change it to, and it works for a few days before the order changes again. I gave up on trying to keep up with the ordering of the policies after a few attempts.

@jaygorrell

+1 to this. I've also reordered ours and it changed to something else a few days later.

@vancluever
Contributor

Everyone, I have found this as well, and from what I've seen the main issue is that AWS converts the CanonicalUser principal to an AWS one, using the identity's access ARN rather than the canonical user ID. I've put in #6955 to address this; via that change you get access to a templated ARN for use as the AWS principal instead of the CanonicalUser one.
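
Concretely, a policy submitted with the principal written as

"Principal": {"CanonicalUser": "<CANONICAL_USER_GUID>"}

comes back from the S3 API rewritten as

"Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <ORIGIN_ID>"}

(placeholders as in the redacted debug output above), so the stored policy never compares equal to the configured one.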

In the meantime, you can also generate this ARN manually with the appropriate ARN base and the id attribute.

@evanstachowiak

I'm also having this issue; it seems to occur with any aws_s3_bucket_policy resource, no matter what the policy is.

@syed-awais-ali

Is there any update on this? I am supplying the exact same policy after importing the bucket into the state file, but when I run the plan it shows two things:

  • It shows that it will alter the policy of the bucket

  • It also marks the bucket for destruction, although I try to match the exact same configuration and attribute values after importing the bucket into the state file

@shide1989

I have this issue too for one of our clients. Is any help or workaround available for this?

Here is my resource declaration:

resource "aws_s3_bucket" "bucket_admin" {
  region = "eu-west-2"
  bucket = "${var.s3_bucket_name}"
  acl    = "public-read"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "Policy1544625976681",
  "Statement": [
    {
      "Sid": "Stmt1544625974653",
      "Effect": "Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource": "arn:aws:s3:::${var.s3_bucket_name}/*"
    }
  ]
}
POLICY

  # To skip confirmation when destroying
  #  force_destroy = true

  tags = {
    Name        = "React bucket"
    Environment = "${var.environment}"
  }

  website {
    # For native apps (React/Vuejs) that use their own router
    error_document = "index.html"
    index_document = "index.html"
  }
}

ghost commented Aug 31, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Aug 31, 2019