File Upload #42
You could use direct browser upload. The bottom of the readme contains information about the helpers that are in the library. In short: your form posts directly to Amazon. More info about that here: http://aws.amazon.com/articles/1434. Another option would be to use multipart uploading, where you only buffer pieces of the file. There might be interesting information here: #21 and here: #2
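For illustration, a rough sketch of the multipart route follows. The helper names used here (initiateMultipartUpload, uploadPart, completeMultipartUpload and BucketFilePart) are assumptions based on what later play-s3 releases expose; check Bucket.scala in the version you actually use for the real signatures.

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import fly.play.s3.{ S3, BucketFile, BucketFilePart }

// Untested sketch. The multipart helpers below are assumed from later
// play-s3 releases; credentials are expected to be configured the way the
// readme describes. Every part except the last must be at least 5 MB.
val bucket = S3("exampleBucket")

val chunk1: Array[Byte] = Array.fill(5 * 1024 * 1024)(0: Byte) // placeholder data
val chunk2: Array[Byte] = Array[Byte](1, 2, 3)                 // final, smaller part

val upload =
  for {
    // Start the multipart upload and receive a ticket identifying it
    ticket <- bucket.initiateMultipartUpload(BucketFile("big-file.bin", "application/octet-stream"))
    // Upload the chunks as numbered parts
    part1 <- bucket.uploadPart(ticket, BucketFilePart(1, chunk1))
    part2 <- bucket.uploadPart(ticket, BucketFilePart(2, chunk2))
    // Ask S3 to combine the parts into the final object
    _ <- bucket.completeMultipartUpload(ticket, Seq(part1, part2))
  } yield ()
```

The point of this approach is that only one chunk has to live in memory at a time, instead of the whole file.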
There's a lot of support and optimization in the AWS Java SDK for uploading files from disk up to S3. Why doesn't play-s3 expose it, and instead expect you to buffer your files into memory?
We are not using the AWS Java SDK. Note that in the comments of #2 there is a suggestion for a solution that supports streaming using actors and multipart uploading.
Yes, it's sadly true :(. This makes the library unsuitable for serious production use, unfortunately, as non-streamed file uploads have a huge memory footprint :(
I am attempting to stream a byte array (not a file on disk) to S3. Am I to conclude from this thread that this is not possible with the current API? I have a question along these lines with much more detail here: http://stackoverflow.com/questions/27586997/streaming-with-iteratees-enumerators-and-futures. Thanks.
I have answered your question with untested code. If you find a clean solution using […]
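Along the same lines as that (untested) answer, here is a hedged sketch of the re-chunking step using Play's iteratees. The uploadPart argument is a deliberate placeholder for whatever performs the actual part upload (a play-s3 call, a hand-signed request, and so on); it is not an existing library API.

```scala
import play.api.libs.iteratee.{ Enumerator, Iteratee }
import scala.concurrent.{ ExecutionContext, Future }

// Untested sketch: consume an Enumerator[Array[Byte]] and hand every
// accumulated chunk of at least `minPartSize` bytes (S3's minimum part
// size is 5 MB) to a caller-supplied upload function.
def consumeInChunks(source: Enumerator[Array[Byte]],
                    uploadPart: (Int, Array[Byte]) => Future[Unit],
                    minPartSize: Int = 5 * 1024 * 1024)
                   (implicit ec: ExecutionContext): Future[Unit] = {

  case class State(buffer: Array[Byte], nextPart: Int)

  // Fold the stream into a buffer, flushing a part whenever it is full
  val sink = Iteratee.foldM(State(Array.empty[Byte], 1)) { (state, bytes: Array[Byte]) =>
    val buffered = state.buffer ++ bytes
    if (buffered.length >= minPartSize)
      uploadPart(state.nextPart, buffered).map(_ => State(Array.empty[Byte], state.nextPart + 1))
    else
      Future.successful(State(buffered, state.nextPart))
  }

  for {
    finalState <- source run sink
    // The last part is allowed to be smaller than the minimum size
    _ <- if (finalState.buffer.nonEmpty) uploadPart(finalState.nextPart, finalState.buffer)
         else Future.successful(())
  } yield ()
}
```

A completeMultipartUpload step (and abort handling on failure) would still have to be layered on top, in the spirit of the actor and multipart suggestion mentioned above.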
I saw and responded briefly. Thanks for your help.
Did you manage to solve it? If so, I would like to introduce the solution as a utility in the upcoming […]
Unfortunately, I was under time pressure to get the feature out and pursued an alternate course, so I wasn't able to figure anything out. Sorry about that. If I get time to play around and come up with something that isn't terrible, I will let you know.
I'm closing this for now.
https://github.com/Rhinofly/play-s3/blob/f612f542596056c6d709f5634f0009c69f1d209a/src/main/java/fly/play/s3/Bucket.scala#L251
From what I'm reading, it looks like we always have to buffer the entire file into memory before we can upload the file to S3. Is this correct, or am I missing an API?
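For reference, the fully buffered path the question is about looks roughly like the readme's example; treat this as a sketch rather than the exact current signatures.

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import fly.play.s3.{ S3, BucketFile }
import java.nio.file.{ Files, Paths }

// The whole file is read into a byte array before the upload starts,
// which is exactly the memory footprint being discussed in this issue.
val bucket = S3("exampleBucket")
val bytes: Array[Byte] = Files.readAllBytes(Paths.get("/tmp/big-file.bin"))

val result = bucket add BucketFile("big-file.bin", "application/octet-stream", bytes)
```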