
provider/aws / provisioner/file: allow file upload to s3 bucket #1884

Closed
JeanMertz opened this issue May 9, 2015 · 16 comments
@JeanMertz
Contributor

It would be nice if we could upload files to S3 using the file provisioner.

Use case:

I have a tarred OpenVPN configuration that I need to place on the servers. I can't use the ssh connection, because the cluster is in a VPC which doesn't allow ssh'ing from outside the VPC. The VPN starts automatically on cluster creation, so eventually I can connect to the OpenVPN server and ssh from inside the VPC, but for this to work, the VPN data file needs to be available before OpenVPN can start.

So I need a way to make private data available on the machine, and the best way inside AWS to do that would be to use a bucket to store this data and combine that with a Terraform controlled IAM role to give access to this data.

(on other providers like OpenStack I just send the file along with the provided user-data, but AWS has a 16KB user-data limit, so that is not an option)

@radeksimko
Member

Hmm, interesting idea, I think that's a valid use-case.

I'm not convinced that this should be a provisioner though.
In context of aws_s3_bucket, I'd be more thinking about something like aws_s3_file.

Either way, there are a few caveats to be solved in either of these solutions - you cannot delete an S3 bucket until you have emptied it entirely. i.e. if you're going to reuse that bucket for something other than just the OpenVPN config, what would you expect Terraform to do when tearing down the infrastructure? Not delete all your data, I assume?

If we allow using an existing S3 bucket that user created outside of TF, similar set of questions need to be answered:

  1. What to do if a file already exists at the same path? Show an error, or overwrite the file?
  2. What to do if I'm destroying the OpenVPN resource plus all related dependencies defined in TF -> should TF remove that file from S3?

@JeanMertz
Contributor Author

@radeksimko yes, that's why I had both provider/aws and provisioner/file in the title. I was also thinking that this might be an aws resource. However, I hadn't seen any other resources used for "provisioning", so I wasn't sure if this was "allowed" within the context of resources.

Regarding file management: that is a good question. Of course my use-case is different from others, but I feel that "destroying" your infrastructure means just that, destroying it. If you had persistent data on a machine, that would be lost as well, so the same could be said for the S3 bucket.

Regarding overwriting, I would assume that it overwrites; in fact, changing the "content" of the file to be uploaded would equal a "needs to be recreated" action, and thus replace that file.

@mitchellh
Contributor

I agree with @radeksimko. It should be a separate resource, but is a good idea.

@apparentlymart
Contributor

I've found myself wanting this too. I was imagining something like this:

resource "aws_s3_object" "config" {
    bucket = "config"
    key = "example-config"
    body = "${template_file.example_config.rendered}"
    region = "us-west-1"
}

This could then have similar CRUD behavior as the consul_keys resource, which is serving a very similar purpose for values in the Consul key-value store.
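For comparison, a hedged sketch of the analogous consul_keys usage in the 0.5-era nested-block syntax (the resource name, KV path, and template resource are illustrative, not from this thread):

```hcl
# Illustrative only: writes a rendered template into the Consul KV store.
# "apps/example/config" and "template_file.example_config" are assumptions.
resource "consul_keys" "app_config" {
    key {
        name  = "config"
        path  = "apps/example/config"
        value = "${template_file.example_config.rendered}"
    }
}
```

An aws_s3_object resource with the same shape would write the rendered value to S3 instead of Consul.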

@apparentlymart
Contributor

After some further thinking, I'm starting to think that there are two different use-cases here:

  • Writing some details about Terraform-managed resources into S3 in the same manner as you might write them into Consul, like I showed in my earlier example.
  • Bulk-loading a directory full of static files into an S3 bucket as part of deploying a static website. In this case, it seems unlikely that you'd want to represent each separate file as its own Terraform resource, and thus the idea I posted earlier doesn't feel right.

I have some cases where we want to stash some Terraform results that need to work before our Consul cluster is up, so the former would be pretty useful to me as a more-flexible alternative to the S3 remote state backend in 0.5.0.

I also have some static sites that I'm hoping to eventually deploy via S3 static publishing. We're trying to standardize on using Terraform for all deployment, regardless of what sort of container we're deploying into, so right now deploying static sites on S3 is a functionality hole for us. I hadn't yet thought much about it, since our static sites are towards the end of our queue of things to migrate over to Terraform-based deployment. (Today we have some hacky scripts for scp'ing files onto a selection of static web server hosts, but we're trying to get away from random scripts as tooling.)
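For the bulk-upload case, one interim approach outside Terraform is the AWS CLI's sync command. A sketch, assuming the AWS CLI is installed and credentials are configured; the local directory and bucket name are placeholders:

```shell
# Mirror a local directory of static files into a bucket,
# removing remote objects that no longer exist locally.
aws s3 sync ./public s3://example-static-site --delete
```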

@m-s-austin
Contributor

I have a work in progress on this issue:
iJoinSolutions#3

I will make a PR on hashicorp once I figure out why my acceptance test is failing, I think I am close to solving that.

@m-s-austin
Contributor

#2079

Tested and working well so far. I have a resource bundle of a couple hundred files that are synced to and destroyed from the bucket as expected.

@bdesilva

This is what I need! How do I get this build and start using the S3 object resource?

@apparentlymart
Contributor

A resource for this was added in #2898, which was released as part of Terraform 0.6.2.
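For anyone landing here, a minimal sketch of the released resource in the 0.6-era syntax (the bucket name and file path are placeholders):

```hcl
# Uploads a local file to an existing S3 bucket.
# "my-config-bucket" and "files/example-config" are assumptions.
resource "aws_s3_bucket_object" "example" {
    bucket = "my-config-bucket"
    key    = "example-config"
    source = "files/example-config"
}
```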

@amontalban

Hey guys,

Thanks for this new feature, I've been waiting for it for a while. Now that we have it, it would be great if aws_s3_bucket could export name as well as id, so you could do something like this:

resource "aws_s3_bucket" "ws-staging-bucket" {
  bucket = "ws-staging-bucket-useast1"
  acl = "private"

  tags {
      Name = "ws-staging-bucket-useast1"
      Environment = "STAGING"
  }
}

resource "aws_s3_bucket_object" "file1" {
    bucket = "${aws_s3_bucket.ws-staging-bucket.name}"
    key = "file1"
    source = "files/file1"

    depends_on = ["aws_s3_bucket.ws-staging-bucket"]
}

Should I open a new issue for this?

Thanks!

@apparentlymart
Contributor

@amontalban did you try interpolating ${aws_s3_bucket.ws-staging-bucket.bucket}? I think that should work, since all of the configuration parameters are also attributes.

(It's kinda weird that the bucket name parameter is called "bucket", and not "name" like with most other resources.)
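To make that concrete, a sketch of the interpolated version (note that referencing the bucket resource this way also creates an implicit dependency, so no explicit depends_on is needed):

```hcl
resource "aws_s3_bucket_object" "file1" {
    # Interpolating the "bucket" argument of the bucket resource;
    # this reference also orders creation after the bucket exists.
    bucket = "${aws_s3_bucket.ws-staging-bucket.bucket}"
    key    = "file1"
    source = "files/file1"
}
```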

@amontalban

@apparentlymart you're right, using ${aws_s3_bucket.ws-staging-bucket.bucket} works; I didn't know I could also use configuration parameters as attributes.

+1 to know why it's called "bucket" instead of "name" like others.

Thanks!

@ringods
Contributor

ringods commented Oct 14, 2015

@apparentlymart the aws_s3_bucket_object resource needs a path to a file as input. How can I pass a rendered template_file as input?

@m-s-austin
Contributor

source = "files/file1"

Also, that's why I didn't call it "name": there are two names in play, the bucket name and the source name.
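On @ringods' question: if the object resource accepts inline content as an alternative to a source file path (as the later change referenced in this thread suggests), passing a rendered template might look like this sketch (resource names are illustrative, and the content argument is an assumption here):

```hcl
resource "aws_s3_bucket_object" "rendered" {
    bucket  = "${aws_s3_bucket.ws-staging-bucket.bucket}"
    key     = "rendered-config"
    # Inline body instead of a file path; assumes a "content" argument.
    content = "${template_file.example_config.rendered}"
}
```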

@apparentlymart
Contributor

I think all of the use-cases in this issue have now been addressed in one way or another:

As of this writing the change in #3200 hasn't yet been included in a release, but it has been merged and should be included in the next one.

I think I spotted everything in this issue so I'm going to close this; please let me know if I missed something.

@ghost

ghost commented Apr 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 30, 2020