
Terraform adds aws_s3_bucket without import when bucket already exists on account #13587

Closed
trjstewart opened this issue Jun 3, 2020 · 8 comments · Fixed by #26011
Labels: bug Addresses a defect in current functionality. · service/s3 Issues and PRs that pertain to the s3 service.
Milestone: v4.24.0

@trjstewart

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

0.12.26, though this is not a version-specific issue.

Affected Resource(s)

  • aws_s3_bucket

Terraform Configuration Files

resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "bucket-that-already-exists"
  acl    = "private"

  tags = {
    Name = "bucket-that-already-exists"
  }
}

Debug Output

https://gist.github.com/trjstewart/d6611512a1cfb467f78d8ac624776eef

Panic Output

n/a

Expected Behavior

When attempting to create a bucket that already exists, we expect to receive a BucketAlreadyExists or BucketAlreadyOwnedByYou error.

Actual Behavior

Due to documented behavior, an API request to create a bucket that already exists in us-east-1 returns a 200 response rather than the expected 409. AWS documentation states that this is for legacy compatibility. As a result, Terraform assumes the resource was created successfully and adds it to its state without the existing resource ever being explicitly imported.

In addition, as a byproduct of the "successful" API request, the ACL on the existing bucket is reset, as described in the documentation referenced above. This causes further unexpected behavior.
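
For anyone who wants to confirm the API behaviour outside of Terraform, here is a minimal sketch using the AWS CLI (bucket names are placeholders; run against an account that already owns the buckets):

# In us-east-1 a second, identical CreateBucket call still returns 200 OK.
aws s3api create-bucket --bucket bucket-that-already-exists --region us-east-1
aws s3api create-bucket --bucket bucket-that-already-exists --region us-east-1

# In any other region the repeated call fails with BucketAlreadyOwnedByYou (409).
aws s3api create-bucket --bucket bucket-that-already-exists-eu --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1
aws s3api create-bucket --bucket bucket-that-already-exists-eu --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1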

Steps to Reproduce

  1. Pick a unique bucket name, for example test_bucket_1591160652.
  2. Create a blank Terraform workspace: terraform workspace new testing-existing-bucket
  3. Create an aws_s3_bucket resource using the unique name:
resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "test_bucket_1591160652"
  acl    = "private"

  tags = {
    Name = "test_bucket_1591160652"
  }
}
  4. terraform apply
  5. Create a new blank workspace: terraform workspace new testing-existing-bucket-two
  6. Create an aws_s3_bucket resource using the same name as in step 3:
resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "test_bucket_1591160652"
  acl    = "private"

  tags = {
    Name = "test_bucket_1591160652"
  }
}
  7. terraform apply (a quick way to confirm the resulting state overlap is sketched below)
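
After step 7, both workspaces end up tracking the same bucket even though no import was ever run. A quick way to confirm the overlap, as a sketch (workspace names as in the steps above):

# Both workspaces list the same bucket resource in their state.
terraform workspace select testing-existing-bucket && terraform state list
terraform workspace select testing-existing-bucket-two && terraform state list
# Expect aws_s3_bucket.my_s3_bucket to appear in both outputs.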

Important Factoids

There is nothing atypical about our environment in this case. I was able to reproduce this on a different account.

References

@ghost ghost added the service/s3 Issues and PRs that pertain to the s3 service. label Jun 3, 2020
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Jun 3, 2020
@rmancy-arkose

Just to add my 2c: although this technically follows the letter of the API, finding yourself in a position where two distinct workspaces are now responsible for the same S3 bucket is surely unexpected.

@ewbankkit
Contributor

@dynajoe
Contributor

dynajoe commented Apr 2, 2021

I had a related issue where only one terraform apply was being performed. Here's the series of steps:

  1. Terraform attempts to create the resource
  2. Terraform continues reporting creating...
  3. After 5 minutes terraform fails with BucketAlreadyOwnedByYou

Looking at the S3 bucket in AWS, it was created about 2 seconds after step 1 above, indicating to me that the creation from the Terraform provider was successful. However, something must have happened for the logic to determine that a retry attempt was needed. Each subsequent retry then reported a 409 conflict, since the creation had indeed succeeded.

Any number of things could have caused an error in the SDK client, e.g. a closed connection while the resource was happily created.

@dynajoe
Contributor

dynajoe commented Apr 14, 2021

We ended up getting around this by importing the bucket before apply.

terraform import aws_s3_bucket.main BUCKET_NAME_HERE || true
terraform apply
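
If you only want to import when the bucket genuinely exists (so a missing bucket still goes through normal creation), a slightly stricter variant of the same idea, as an untested sketch:

# Only import when the bucket already exists; head-bucket exits non-zero when
# the bucket is missing or inaccessible. "|| true" keeps reruns idempotent if
# the resource is already in state.
if aws s3api head-bucket --bucket BUCKET_NAME_HERE 2>/dev/null; then
  terraform import aws_s3_bucket.main BUCKET_NAME_HERE || true
fi
terraform apply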

@ewbankkit ewbankkit added enhancement Requests to existing resources that expand the functionality or scope. and removed service/s3 Issues and PRs that pertain to the s3 service. labels Apr 21, 2021
@breathingdust breathingdust added service/s3 Issues and PRs that pertain to the s3 service. and removed needs-triage Waiting for first response or review from a maintainer. labels Sep 22, 2021
@markbaird

I just ran into this as well. I accidentally used the same bucket name in two Terraform workspaces, and Terraform didn't throw any errors. The two workspaces ended up overwriting each other's KMS key settings on the bucket, which caused lots of issues.

I don't understand why the previous ticket here was closed. The close reason given was that S3 added strong consistency, but that was for objects, not buckets; S3 has always had strong consistency for buckets. The issue is that Terraform is somehow automatically "importing" an existing resource instead of throwing an error because it already exists.

@bassmanitram

bassmanitram commented Feb 3, 2022

Me too (twice, actually). In the most recent occurrence there were two different workspaces: one from an apply in mid-December 2021, and one from a separate, erroneous set of TF scripts applied yesterday. The latter imported ("adopted") the bucket created by the first.

TF 0.12.31; AWS provider 3.69.0 for the first workspace's state, 3.74.0 for the later one.

@gdavison gdavison added bug Addresses a defect in current functionality. and removed enhancement Requests to existing resources that expand the functionality or scope. labels Jul 27, 2022
@gdavison gdavison self-assigned this Jul 27, 2022
@github-actions github-actions bot added this to the v4.24.0 milestone Jul 28, 2022
@github-actions

github-actions bot commented Aug 3, 2022

This functionality has been released in v4.24.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

@github-actions

github-actions bot commented Sep 3, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 3, 2022