S3 backend on nonAWS implementation (OCI) not working in 1.6.3 #34053
I can get it to initialize, but apply and destroy fail to persist data to the backend, producing checksum content errors. For testing I used hard-coded secrets:

```hcl
backend "s3" {
  bucket = "<BUCKET>"
  key    = "<FILENAME>"
  region = "eu-frankfurt-1"
  endpoints = {
    s3 = "https://my-dell-s3.com"
  }
  access_key                  = "<MY_KEY>"
  secret_key                  = "<MY_SECRET>"
  skip_region_validation      = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  use_path_style              = true
}
```

As I said, init works, but plan and destroy fail with:
Why does Terraform not check whether it can successfully put an object (with a content SHA check) before executing the entire plan, either before apply or, even better, during init? Honestly, all these S3-related errors do not shine a good light on HashiCorp's testing processes. These are not some obscure functions that do not work; it's basic functionality. |
I didn't try with hardcoded credentials because it's a hard requirement for me to have them in a file, but now I'm even less inclined to do more tests. Thanks for the warning @hegerdes |
@hegerdes For what it's worth, I'm seeing the same error using Ceph's S3 compatible storage. Because @12345ieee seems to have a slightly different problem, I opened a separate bug at #34086. |
Same issue with DigitalOcean Spaces:

```hcl
terraform {
  backend "s3" {
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
    use_path_style              = true
    endpoints = {
      s3 = "https://fra1.digitaloceanspaces.com"
    }
    region = "fra1" // needed
    bucket = "thorauto-terraform"
    key    = "terraform.tfstate"
  }
}
```

My log actually contains credentials, and it's hard to trim because there are loads of them in that debug dump. When initializing Terraform with
404 in here, which probably means it can read files, but when trying to do
|
Same issue here... |
Unfortunately we are unable to test changes to the S3 backend against the various S3-compatible storage providers, though it is never our intention to break existing workflows. I would recommend raising this with the upstream providers as the behavior has been confirmed to work correctly with Amazon S3. As part of the S3 Backend maintenance we did move from using v1 of the AWS Go SDK to v2, specifically adopting the S3 Manager feature which improves performance on large state files. It's possible that these providers have not implemented this functionality. We may be able to help resolve these issues with more information from those provider teams, but at this time cannot commit to investigating further. |
@jar-b Thanks for the statement but this is not a great perspective going forward. |
@hegerdes Thanks for the feedback. For vendors which offer "S3-compatible" services, the burden of compatibility falls on those vendors. HashiCorp is supporting a backend for the AWS S3 service, and is leveraging the Golang AWS SDK to do so. As AWS updates its SDK, other competing services may fall behind on compatibility for some amount of time. We plan to update the S3 backend documentation to make this nuance of using the S3 backend more explicit. I apologize for any frustration this may cause. Thanks again for your continued feedback on this issue! |
Hello everyone. @12345ieee's initial report was a failure due to failing authentication:
A number of other people have reported different errors with their use of "S3-compatible" services, all related to the error
Typically, we ask for separate issues for separate problems. In this case, we already have several issues related to |
@12345ieee, can you please share your shared credentials file (with sensitive values blanked out) |
Sure @gdavison , here you go:
|
Seeing slightly different errors; following the thread. HashiCorp: please revert the changes to the S3 backend, or create an s3_v2 backend or some similar solution going forward. We may look to accelerate moving our states to Artifactory.
|
@rp-jasonp - you may also need to set `skip_requesting_account_id`: https://developer.hashicorp.com/terraform/language/settings/backends/s3#skip_requesting_account_id |
No dice.
Oracle's official doc reference: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformUsingObjectStore.htm |
@rp-jasonp, since this is a different problem from @12345ieee's (and not related to |
Same error here:
|
I recently ran into issues with mountpoint-s3 not working with Ceph RADOSGW with the |
I use OCI too, and it's failing with the same error on the latest version. Can someone please look into it?
|
I know it wasn't the aim of 1.6.3, but that didn't solve the issue on OCI (Oracle Cloud) Object Storage either. |
Any idea what to do here? My setup just stopped working and I can't tell what to do anymore. I would seriously appreciate any guidance if you got it to work with DigitalOcean.

```hcl
terraform {
  backend "s3" {
    bucket = "[REDACTED]"
    endpoints = {
      s3 = "https://nyc3.digitaloceanspaces.com"
    }
    key    = "[REDACTED]/terraform.tfstate"
    region = "us-east-1"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
    skip_region_validation      = true
  }
}
```

I really dislike HashiCorp's comment. I am not even an ops dev, so I cannot afford such breaking changes when I have only minimal skills to get things done, and I can't accept such a horrific way to treat the ecosystem, to say the least. It is really lazy to blame others for your failures. I do not appreciate it one bit! |
Hi @yordis, thanks for your comment. Just a reminder to please follow the Community Guidelines when posting. As a quick suggestion, unless you specifically needed one of the new features from 1.6, you can continue to use |
I'm also furious here, so I've contacted DigitalOcean about how they are going to deal with this. They said they will try to fix the API on their side so that it works with the latest Terraform. I believe we need to create a ticket for them in the digitalocean/terraform-provider-digitalocean repo. I've also just asked for an update by email, as they have had internal discussions about this for the last couple of weeks. |
If people are furious, I believe something should be done about this. It's absolutely unacceptable to break backward compatibility in this manner. It's also quite sad that HashiCorp doesn't partner with the top 10 biggest cloud providers so they could adapt to upcoming changes before executing drastic changes that break everyone's infrastructure. And I'm speaking of companies with tens of millions in turnover per month. Our company needs infrastructure tests on the latest version, so I can't stick to the older one. For now I've split my states and run a different version on each of them with the help of Docker, which is a real pain, but it will do for a while. The policies HashiCorp is using don't play well with the IT market. That's not how open source works, especially when so many companies depend on you. Also, who made the decision to rely on the AWS SDK for S3 remote state anyway? Use OpenStack as the baseline for S3, and everything will work everywhere. |
OK, here's the update from DigitalOcean:
|
On the Ceph RADOSGW side (which is the software that quite a lot of non-AWS S3 services use) there already are plans to implement the missing feature for the upcoming version: https://tracker.ceph.com/issues/63153#note-8 |
+1 for OCI, after apply
|
Hello. P.S. `access_key` and `secret_key` are just for demo purposes; don't use them in a real environment. P.P.S. From #34086 |
Works for me, thank you. |
I got it working with 1.6.6 and OCI. Essentially these:

```hcl
skip_credentials_validation = true
skip_region_validation      = true
skip_requesting_account_id  = true
skip_metadata_api_check     = true
skip_s3_checksum            = true
use_path_style              = true
```

Note that I don't set
Otherwise, critical for me was the finding that the error also occurs if you have a remote state reference. You need those additions there as well. Your error about STS is probably directly related to missing

```hcl
data "terraform_remote_state" "remote" {
  backend = "s3"
  config = {
    bucket = "devops"
    key    = "tfstate/terraform.tfstate"
    region = "us-phoenix-1"
    endpoints = {
      s3 = "https://[REDACTED].compat.objectstorage.us-phoenix-1.oraclecloud.com"
    }
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
    use_path_style              = true
  }
}
```
|
Encountered this with Terraform version:

```
❯ terraform version
Terraform v1.7.4
on darwin_arm64
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/oracle/oci v5.30.0
```

Using OCI for the remote state S3 backend and Oracle's documentation, I encountered the same surprising error message AFTER

```
╷
│ Error: Failed to save state
│
│ Error saving state: failed to upload state: operation error S3: PutObject, https response error StatusCode: 400, RequestID:
│ iad-1:B9UETKCkS7JqOriMsjfljnZXgJ_Nh6nqtl3R-VgJB5zhfj6mZueR-Vm_xviWX-e1, HostID: , api error InvalidArgument: x-amz-content-sha256
│ must be UNSIGNED-PAYLOAD or a valid sha256 value.
╵
╷
│ Error: Failed to persist state to backend
│
│ The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the
│ state has been written to the file "errored.tfstate" in the current working directory.
│
│ Running "terraform apply" again at this point will create a forked state, making it harder to recover.
│
│ To retry writing this state, use the following command:
│     terraform state push errored.tfstate
│
╵
```

Reading through the comments above, I added the

```
❯ terraform init -reconfigure

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of oracle/oci from the dependency lock file
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed oracle/oci v5.30.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

❯ terraform state push errored.tfstate
```

And I can verify the state file is now present in my OCI bucket:

```
> oci os object list -ns <my-bucket-namespace> -bn <my-bucket>
{
  "data": [
    {
      "archival-state": null,
      "etag": "da5b84c7-d80f-4fb1-b68c-14352985a975",
      "md5": "PFvsKR2ny4ahwFY481W6KA==",
      "name": "tf-landing-zone.tfstate",
      "size": 5432,
      "storage-tier": "Standard",
      "time-created": "2024-03-07T07:14:39.992000+00:00",
      "time-modified": "2024-03-07T07:14:39.992000+00:00"
    }
  ],
  "prefixes": []
}
```
|
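The `InvalidArgument: x-amz-content-sha256` error quoted above comes from SigV4 payload signing: the client sends an `x-amz-content-sha256` header whose value must be either the literal string `UNSIGNED-PAYLOAD` or the hex SHA-256 digest of the request body, and a server that accepts only one of the two forms rejects the upload. A minimal illustrative sketch of that validation rule (not the actual Terraform or server code):

```python
import hashlib

UNSIGNED = "UNSIGNED-PAYLOAD"

def content_sha256(body: bytes) -> str:
    """Hex SHA-256 of the request body, as carried in x-amz-content-sha256."""
    return hashlib.sha256(body).hexdigest()

def header_is_valid(header: str, body: bytes) -> bool:
    """An S3-compatible endpoint must accept either form; rejecting the
    hashed form yields the "must be UNSIGNED-PAYLOAD or a valid sha256
    value" error seen in this thread."""
    return header == UNSIGNED or header == content_sha256(body)

# Hypothetical state payload, just to exercise both accepted forms.
state = b'{"version": 4}'
print(header_is_valid(UNSIGNED, state))               # accepted
print(header_is_valid(content_sha256(state), state))  # accepted
print(header_is_valid("bogus", state))                # rejected
```

Providers that predate the newer SDK behavior effectively implement only part of this check, which is why `skip_s3_checksum = true` works around the failure on their side.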
The missing checksum features / bugfixes were merged to Ceph master three weeks ago: ceph/ceph#54856. It's also planned to backport them to the Reef and Quincy releases, but that is still ongoing. |
Hi, I'm using TF/OCI, and I encountered this error today. It worked fine yesterday:

```
Error: Failed to get existing workspaces: Unable to list objects in S3 bucket "-terraform-state-": operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: fra-1:pWnsaVPzVkERWDRo*************RY3eb8kjAEbaajwyCvRZFjSzksi, HostID: , api error SignatureDoesNotMatch: The secret key required to complete authentication could not be found. The region must be specified if this is not the home region for the tenancy.
```

My backend.tf: terraform {

I tried everything, including the old backend.tf with TF v1.5.x. All development has stopped, as we can't deploy. Oracle Support says they have several reports on this. |
How do I refer to this access key and secret key from the GitHub runner file, i.e. the ~/.aws/credentials file, here? I tried echoing the keys directly, but it didn't work. Can you please suggest what the configuration could be in main.tf and in the workflow for the backend?

terraform {
|
Hi, OCI must have changed something, as it works now :) Both TFs, on 1.5.x and 1.6.x |
Could you possibly share the reference code for the configuration? It works if I put the secret key and access key values directly in the backend configuration, but that doesn't seem a feasible approach. Can someone please guide me on how values can be referenced for the access key and secret key? |
I have credentials defined in

```hcl
terraform {
  backend "s3" {
    bucket = "devops"
    key    = "tfstate/terraform.tfstate"
    region = "us-ashburn-1"
    endpoints = { s3 = "https://REDACTED.compat.objectstorage.us-ashburn-1.oci.customer-oci.com" }
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
    use_path_style              = true
  }
}
```

I ensure the environment variable
Also ensure you use the same |
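The shared credentials file referenced in comments here is plain INI. As a quick sanity check that the profile Terraform will look up actually carries both keys, it can be parsed directly; this is an illustrative sketch, and the profile name and key values below are placeholders, not taken from this thread:

```python
import configparser
import tempfile

def profile_keys(path: str, profile: str) -> dict:
    """Return the access/secret key pair for one profile of an
    AWS-style shared credentials file (INI format)."""
    ini = configparser.ConfigParser()
    ini.read(path)
    section = ini[profile]
    return {
        "access_key": section["aws_access_key_id"],
        "secret_key": section["aws_secret_access_key"],
    }

# Hypothetical credentials file content, written to a temp file for demo.
sample = """[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEY
"""
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(sample)
    creds_path = f.name

print(profile_keys(creds_path, "default"))
```

If the profile section or either key is missing, the lookup raises a `KeyError`, which mirrors the backend failing to find credentials for the configured profile.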
I can confirm this style works now, on 1.7.4:

```hcl
backend "s3" {
  bucket = "[REDACTED]"
  key    = "[REDACTED]"
  region = "eu-frankfurt-1"
  endpoints = {
    s3 = "https://[REDACTED].compat.objectstorage.eu-frankfurt-1.oraclecloud.com"
  }
  profile = "[REDACTED]"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_region_validation      = true
  skip_requesting_account_id  = true
  skip_s3_checksum            = true
  use_path_style              = true
}
```

using the default credentials file, but no env vars. |
I have the same problem again. I wrote to Oracle Support. Error: error loading state: SignatureDoesNotMatch: The secret key required to complete authentication could not be found. The region must be specified if this is not the home region for the tenancy. |
me too, it appears absolutely randomly |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. |
Terraform Version

Terraform Configuration Files

Config that worked in v1.5.x:

My attempt to port it to v1.6.1, guided by the init warnings, but still failing; skip_requesting_account_id didn't help at all.

Debug Output

See below.

Expected Behavior

For my init to go through, like it did in TF 1.5.x with the old config. In all honesty I'd have preferred not to need an init at all, but I can live with this.

Actual Behavior

The relevant log part is thankfully short:

$ TF_LOG=trace terraform init -reconfigure

Steps to Reproduce

$ terraform init -reconfigure

Additional Context

The documentation at https://developer.hashicorp.com/terraform/language/settings/backends/s3 has not been updated, so that didn't help.

References