
Allow non-AWS S3 backends #15553

Merged 1 commit into hashicorp:master on Oct 2, 2017

Conversation

bonifaido
Contributor

This commit makes STS, metadata, and other AWS-specific API calls optional, so the backend initialization will not send non-AWS API tokens to AWS APIs.

This fixes #12377

@Tyrael

Tyrael commented Sep 28, 2017

ccing @rowleyaj and @jbardin for visibility

Member

@jbardin jbardin left a comment


Hi @bonifaido,

Thanks for the PR!

@jbardin jbardin merged commit d477d1f into hashicorp:master Oct 2, 2017
@lfarnell

lfarnell commented Oct 6, 2017

@bonifaido Can I ask which non-AWS S3 implementation you tested against? I believe that if your non-AWS S3 doesn't have a region matching one of the regions available from AWS, it will still fail. I can file an issue about this if that's the case.

@bonifaido
Contributor Author

bonifaido commented Oct 6, 2017

@lfarnell I used IBM Cloud Object Storage (https://www.ibm.com/cloud-computing/bluemix/cloud-object-storage); it works well with the following configuration:

terraform {
  backend "s3" {
    endpoint = "http://s3.eu-geo.objectstorage.softlayer.net"
    region = "us-west-1" # Basically this gets ignored.
    profile = "cos-profile"
    bucket = "remote-state"
    key = "terraform"
    skip_requesting_account_id = true
    skip_credentials_validation = true
    skip_get_ec2_platforms = true
    skip_metadata_api_check = true
  }
}

This works because the endpoint determines which region you are using.

To be honest I haven't tried with anything else.

@trodemaster

I have successfully tested this with Cloudian S3, and it's now working using the above config! Terraform v0.10.8-dev (1feb26f)

@trodemaster

I have attempted to configure the same working S3 backend as a terraform_remote_state data source, but it's failing with InvalidClientTokenId. Does anybody know if the terraform-provider-terraform plugin will need to be updated to work with this type of S3 backend config?

@tomzepf-oracle-zz

Is there a way to force the use of bucket (path-style) endpoints instead of virtual hosted-style?
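Something like the following is what I have in mind (force_path_style is my guess at the option name; check the s3 backend docs for your Terraform version):

```hcl
terraform {
  backend "s3" {
    endpoint         = "https://objectstorage.example.com" # placeholder endpoint
    bucket           = "remote-state"
    key              = "terraform.tfstate"
    region           = "us-east-1"
    force_path_style = true # request https://endpoint/bucket/key instead of https://bucket.endpoint/key
  }
}
```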

@RonnyMaas

RonnyMaas commented Nov 5, 2017

This indeed works (Terraform v0.10.8), but it still needs a valid AWS region even with a custom non-AWS endpoint.
I see commits for a new key, skip_region_validation; I assume this will address that in a future release.
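If that key lands, the full set of skips would presumably look something like this (skip_region_validation is my reading of those commits, not a released option yet):

```hcl
terraform {
  backend "s3" {
    endpoint = "https://s3.custom.example.net" # placeholder non-AWS endpoint
    region   = "anything"                      # no longer checked against the AWS region list
    bucket   = "remote-state"
    key      = "terraform.tfstate"
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_requesting_account_id  = true
    skip_get_ec2_platforms      = true
    skip_metadata_api_check     = true
  }
}
```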

@RafPe

RafPe commented Nov 6, 2017

@trodemaster I'm also trying to use Cloudian S3, and unfortunately it does not work properly for me. I'm getting the same error message as you do :(

I have used the following config

data "terraform_remote_state" "cloudian_s3" {
  backend = "s3"
  config {
    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_get_ec2_platforms      = true
    skip_metadata_api_check     = true
    access_key = "00**11"
    secret_key = "xx***yyy"
    endpoint   = "https://s3.some.domain.com"
    region     = "us-west-1"
    bucket     = "terraform"
    key        = "vpc.admin/terraform.tfstate"
  }
}

Using Terraform v0.10.8

Error says

Error: Error refreshing state: 1 error(s) occurred:

* data.terraform_remote_state.cloudian_s3: 1 error(s) occurred:

* data.terraform_remote_state.cloudian_s3: data.terraform_remote_state.cloudian_s3: error initializing backend: InvalidClientTokenId: The security token included in the request is invalid.
	status code: 403

@Laxman-SM

Laxman-SM commented Feb 19, 2018

@RafPe Instead of an access key, use profile = "your profile name". Just point it at your endpoint provider and check again. I fixed this with AWS on the ticket mentioned below. Also try upgrading Terraform to 0.11.2.
hashicorp/terraform-provider-aws#663
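Something like this (profile name and credential values are placeholders):

```hcl
# ~/.aws/credentials
#
# [cloudian]
# aws_access_key_id     = XXXXXXXXX
# aws_secret_access_key = XXXXXXXXX

data "terraform_remote_state" "cloudian_s3" {
  backend = "s3"
  config {
    profile  = "cloudian" # matches the section name in ~/.aws/credentials
    endpoint = "https://s3.some.domain.com"
    region   = "us-west-1"
    bucket   = "terraform"
    key      = "vpc.admin/terraform.tfstate"
    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_get_ec2_platforms      = true
    skip_metadata_api_check     = true
  }
}
```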

@peterver

Tested and working with DigitalOcean Spaces.

This was my config:

terraform {
  required_version = ">= 0.11, < 0.12"
  backend "s3" {
    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_get_ec2_platforms      = true
    skip_metadata_api_check     = true
    access_key = "XXXXXXXXX"
    secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXX"
    endpoint   = "https://xxx.digitaloceanspaces.com"
    region     = "us-east-1"
    bucket     = "XXXXXXX" // name of your space
    key        = "production/terraform.tfstate"
  }
}
  • endpoint should be of the form https://ams3.digitaloceanspaces.com (this example uses ams3 as the region)
  • bucket is the name of your Space on DigitalOcean
  • access_key is the key you'll find on your API page for Spaces (https://cloud.digitalocean.com/settings/api/tokens)
  • secret_key is the secret for the key (not shown on the API page, but you can regenerate it)
  • key is the path on your Space (folder structure) and the file it should save to

@angristan

I encountered a 403 error while using Wasabi as a backend.

Exporting TF_LOG=TRACE helped me debug it. FYI, I had missed the skip_credentials_validation = true option.
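For reference, roughly what I ran (the log path is just an example):

```shell
# Enable Terraform's most verbose logging and send it to a file.
export TF_LOG=TRACE
export TF_LOG_PATH=./terraform-trace.log

# Then re-run the failing command with tracing on, e.g.:
#   terraform init

# Confirm the variables are set before re-running.
echo "TF_LOG=$TF_LOG"
```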

@ghost

ghost commented Apr 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 1, 2020
@bonifaido bonifaido deleted the custom_s3_backend branch April 1, 2020 07:32
Development

Successfully merging this pull request may close these issues.

New s3 backend config not working with non AWS s3 endpoint