r/aws_s3_bucket_object: Use own hash to track object changes (move away from the etag md5 value) #6668
Comments
What is the current best workaround? I'm still using content_base64, which has the drawback of embedding my files' content, base64-encoded, into my state files.
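For context, the content_base64 workaround mentioned above looks roughly like this; a minimal sketch with illustrative bucket and file names. The drawback described is visible here: the whole file body ends up base64-encoded in the state.

```hcl
# Workaround: inline the file as base64 content so Terraform itself can
# diff it. The entire (encoded) file body is stored in the state file.
resource "aws_s3_bucket_object" "example" {
  bucket         = "my-bucket"         # illustrative
  key            = "artifacts/app.zip" # illustrative
  content_base64 = filebase64("${path.module}/app.zip")
}
```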
I got a headache trying to find any other possible workaround for this, but couldn't think of one that wouldn't require broken state in between (i.e. deleting the object before running terraform plan). Please fix this; it would be sufficient if there were any way of triggering recreation from any attribute value, but it seems there is none.
Hi, I got stuck on this issue and can't use
Is there any other workaround than content_base64? I have Glue code in S3 encrypted with KMS and really don't want to have all of that code inlined in the Terraform resource.
I have a case where we download JARs from Artifactory and upload them to an S3 bucket. When I deploy this twice, the first run uploads the JAR; the second run, after I update the JAR version in the URL, downloads the new JAR but fails to update the object because the etag evaluates as unchanged.
Same error here; my files are too big for base64, so I need to taint them manually. Any progress on this issue?
While waiting for PR #11522, a functional workaround is to use metadata, as sketched below:
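A minimal sketch of that metadata workaround, with illustrative names: attaching the file's hash as object metadata makes the plan diff whenever the file changes, without relying on the etag.

```hcl
# Workaround: store a hash of the source file as object metadata.
# When the file changes, the metadata changes, forcing an update.
resource "aws_s3_bucket_object" "example" {
  bucket = "my-bucket"   # illustrative
  key    = "glue/job.py" # illustrative
  source = "${path.module}/job.py"

  metadata = {
    md5 = filemd5("${path.module}/job.py")
  }
}
```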
This functionality has been released in v3.50.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
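For reference, the functionality shipped in v3.50.0 appears to be the source_hash argument added by PR #11522; a minimal sketch, assuming illustrative bucket and file names:

```hcl
# source_hash triggers updates like etag does, but is tracked in state
# rather than compared to the S3-computed ETag, so it also works with
# SSE-KMS encrypted objects.
resource "aws_s3_bucket_object" "example" {
  bucket      = "my-bucket" # illustrative
  key         = "app.jar"   # illustrative
  source      = "${path.module}/app.jar"
  source_hash = filemd5("${path.module}/app.jar")
}
```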
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Description
While discussing an issue with the etag that S3 objects receive from AWS (#5033), and how it does not work when using encryption, I came to wonder why the etag value of an S3 object is so naturally used to track local changes of a file against the object in S3.
Using the etag is unsafe in multiple cases (for SSE-KMS-encrypted or multipart-uploaded objects, the ETag is not the MD5 digest of the object data); see https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html.
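For illustration, this is the pattern that breaks; a sketch with illustrative names. With SSE-KMS, the ETag returned by S3 is not the MD5 of the plaintext, so the comparison below never settles:

```hcl
# Unsafe: for a KMS-encrypted (or multipart-uploaded) object, S3's ETag
# is not the file's MD5, so this produces a perpetual diff.
resource "aws_s3_bucket_object" "example" {
  bucket     = "my-bucket"           # illustrative
  key        = "config/app.json"     # illustrative
  source     = "${path.module}/app.json"
  etag       = filemd5("${path.module}/app.json")
  kms_key_id = aws_kms_key.example.arn # assumes an existing key resource
}
```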
My proposition is to move away from having the user provide the file's MD5 hash as an etag value to detect that the file has changed, and instead to compute a hash (MD5, or rather SHA-256) of the local file and store it in the Terraform state after a successful upload.
That way, computing the checksum of the source and comparing it to the Terraform state should be enough to recognize a changed file and the need to update the S3 object.
Or am I missing something, i.e. a use case of the etag for recognizing file changes that this approach would not cover?
Certainly the etag, as a value provided by Amazon S3, allows detecting that the file changed outside of Terraform, but that is not really an issue, as resources managed by Terraform and their state must not be manipulated independently anyway. And a refresh of the hash is always possible by simply downloading the file.
New or Affected Resource(s)
aws_s3_bucket_object
Potential Terraform Configuration
Adding a hash / changing the way a changed file is recognized should be transparent to existing code; a sketch follows below. The referenced issue about incomplete documentation of the etag when using server-side encryption is still valid, though.
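To illustrate the transparency claim: under the proposal, an existing configuration like the following would need no changes; the provider would hash the source file at apply time and track that hash in state on its own (a sketch under that assumption, with illustrative names):

```hcl
# No new arguments required under the proposal: the provider would compute
# a hash of `source` after upload, store it in state, and compare it on
# each subsequent plan.
resource "aws_s3_bucket_object" "example" {
  bucket = "my-bucket"         # illustrative
  key    = "artifacts/app.zip" # illustrative
  source = "${path.module}/app.zip"
}
```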
References
#5033: etag can't be used with any server encryption, but docs mention a different thing