provider/aws / provisioner/file: allow file upload to s3 bucket #1884
Hmm, interesting idea. I think that's a valid use-case, but I'm not convinced that this should be a provisioner. Either way, there are a few caveats to be solved in either of these solutions: you cannot delete an S3 bucket until you have emptied it. That is, if you're going to reuse that bucket for something other than just the OpenVPN config, what would you expect Terraform to do when tearing down the infrastructure? Not delete all your data, I assume? If we allow using an existing S3 bucket that the user created outside of TF, a similar set of questions needs to be answered:
@radeksimko yes, that's why I had both. Regarding file management: that is a good question. Of course my use-case is different from others', but I feel that "destroying" your infrastructure means just that: destroying it. If you had persistent data on a machine, that would be lost as well, so the same could be said for the S3 bucket. In regards to overwriting, I would assume that it overwrites; in fact, changing the "content" of the file to be uploaded would equal a "needs to be recreated" action, and thus replace that file.
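On the teardown question raised above, the AWS provider's `aws_s3_bucket` resource has a `force_destroy` argument for exactly this situation. A minimal sketch (the bucket name is illustrative, not from this thread):

```hcl
# force_destroy tells Terraform it may delete all objects in the
# bucket before destroying it. Without it, destroying a non-empty
# bucket fails, as noted in the comment above.
resource "aws_s3_bucket" "scratch" {
  bucket        = "my-scratch-bucket" # illustrative name
  force_destroy = true
}
```

This keeps the default behavior safe (refuse to destroy a non-empty bucket) while letting users opt in to data loss explicitly.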
I agree with @radeksimko. It should be a separate resource, but it is a good idea.
I've found myself wanting this too. I was imagining something like this:

```hcl
resource "aws_s3_object" "config" {
  bucket = "config"
  key    = "example-config"
  body   = "${template_file.example_config.rendered}"
  region = "us-west-1"
}
```

This could then have similar CRUD behavior as the
After some further thinking, I'm starting to think that there are two different use-cases here:
I have some cases where we want to stash some Terraform results that need to work before our Consul cluster is up, so the former would be pretty useful to me as a more flexible alternative to the S3 remote state backend in 0.5.0. I also have some static sites that I'm hoping to eventually deploy via S3 static publishing. We're trying to standardize on using Terraform for all deployment, regardless of what sort of container we're deploying into, so right now deploying static sites on S3 is a functionality hole for us, but I hadn't yet thought much about it since our static sites are towards the end of our queue of things to migrate over to Terraform-based deployment. (Today we have some hacky scripts for
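For the static-site use-case mentioned above, the `aws_s3_bucket` resource supports a website configuration block, which could be combined with per-file object resources to publish a site. A rough sketch (bucket and document names are illustrative assumptions):

```hcl
# Bucket configured for S3 static website hosting.
resource "aws_s3_bucket" "site" {
  bucket = "example-static-site" # illustrative name
  acl    = "public-read"

  website {
    index_document = "index.html"
    error_document = "404.html"
  }
}
```

Each file of the site would then be uploaded as its own object resource pointing at this bucket.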
I have a work in progress on this issue. I will make a PR on the hashicorp repo once I figure out why my acceptance test is failing; I think I am close to solving that.
Tested and working well so far. I have a resource bundle of a couple hundred files that are synced to and removed from the bucket as expected.
This is what I need! How do I get this built so I can use the S3 object resource?
A resource for this was added in #2898, which was released as part of Terraform 0.6.2.
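For reference, the resource that shipped is `aws_s3_bucket_object`. A minimal example in the style of the era (bucket name and file path are illustrative):

```hcl
# Uploads a local file to an existing S3 bucket.
resource "aws_s3_bucket_object" "object" {
  bucket = "my-bucket"          # illustrative bucket name
  key    = "example-config"     # object key in the bucket
  source = "files/example.conf" # local file to upload
}
```

Changing the source file's contents forces the object to be re-uploaded, matching the "needs to be recreated" behavior discussed earlier in the thread.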
Hey guys, thanks for this new feature, I've been waiting for it for a while. Now that we have it, it would be great if `aws_s3_bucket` could export `name` as well as `id`, so you could do something like this:

```hcl
resource "aws_s3_bucket" "ws-staging-bucket" {
  bucket = "ws-staging-bucket-useast1"
  acl    = "private"

  tags {
    Name        = "ws-staging-bucket-useast1"
    Environment = "STAGING"
  }
}

resource "aws_s3_bucket_object" "file1" {
  bucket     = "${aws_s3_bucket.ws-staging-bucket.name}"
  key        = "file1"
  source     = "files/file1"
  depends_on = ["aws_s3_bucket.ws-staging-bucket"]
}
```

Should I open a new issue for this? Thanks!
@amontalban did you try interpolating it instead? (It's kind of weird that the bucket name parameter is called "bucket" and not "name" like with most other resources.)
@apparentlymart you're right, that works. +1 to wanting to know why it's called "bucket" instead of "name" like the others. Thanks!
@apparentlymart that's also why I didn't call it name: there are two names involved, the bucket name and the source name.
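Since the `id` attribute of an `aws_s3_bucket` is the bucket name itself, the export requested above can be approximated by interpolating `id`:

```hcl
resource "aws_s3_bucket_object" "file1" {
  # For S3 buckets, id and the bucket name are the same value.
  # Interpolating id also creates an implicit dependency on the
  # bucket, so an explicit depends_on is unnecessary here.
  bucket = "${aws_s3_bucket.ws-staging-bucket.id}"
  key    = "file1"
  source = "files/file1"
}
```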
I think all of the use-cases in this issue have now been addressed in one way or another:
As of this writing the change in #3200 hasn't yet been included in a release, but it has been merged and should be included in the next one. I think I've spotted everything in this issue, so I'm going to close it; please let me know if I missed something.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
It would be nice if we could upload files to S3 using the `file` provisioner.

Use case: I have a tarred OpenVPN configuration that I need to place on the servers. I can't use the `ssh` connection, because the cluster is in a VPC which doesn't allow SSH'ing from outside the VPC. The VPN starts automatically on cluster creation, so eventually I can connect to the OpenVPN server and SSH from inside the VPC, but for this to work, the VPN data file needs to be available before OpenVPN can start. So I need a way to make private data available on the machine, and the best way inside AWS to do that would be to use a bucket to store this data, combined with a Terraform-controlled IAM role that gives access to it.

(On other providers like OpenStack I just send the file along with the provided `user-data`, but AWS has a 16KB user-data limit, so that is not an option.)
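The "bucket plus IAM role" approach described above could be wired up roughly as follows. This is a sketch only; the role, policy, profile, and bucket names are all illustrative assumptions, not from this thread:

```hcl
# Role that EC2 instances can assume.
resource "aws_iam_role" "vpn_config_reader" {
  name               = "vpn-config-reader" # illustrative
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
}

# Allow the role to read objects from the config bucket.
resource "aws_iam_role_policy" "read_vpn_config" {
  name   = "read-vpn-config"
  role   = "${aws_iam_role.vpn_config_reader.id}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::my-vpn-bucket/*"
  }]
}
EOF
}

# Instance profile that attaches the role to EC2 instances.
# Note: older provider versions used a "roles" list here;
# newer ones use a singular "role" argument.
resource "aws_iam_instance_profile" "vpn" {
  name  = "vpn-config-profile"
  roles = ["${aws_iam_role.vpn_config_reader.name}"]
}
```

An instance launched with this profile can then fetch the OpenVPN tarball from the bucket at boot (for example from user-data) without any inbound SSH access.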