Slow internet causes files to be deleted because of timeout #683
Comments
It seems the 2-minute limitation is hard-coded into the SDK: after overriding the value inside the SDK itself to 10 minutes, the chunks upload successfully without fail. However, the timeout cannot be modified from outside the SDK:

```javascript
var managedUpload = new AWS.S3.ManagedUpload({
  queueSize: options.aWSQueueSize || 4,
  params: params,
  httpOptions: {
    timeout: 60 * 1000 * 10 // 10 minutes
  }
});
```

The `httpOptions` here are the ones specified in http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property
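For context on why slow links hit this limit: with the SDK's default 5 MB part size and a 2-minute timeout, a connection has to sustain roughly 43 KB/s per concurrent part or the part times out. A quick back-of-the-envelope check (the numbers are the defaults discussed in this thread; the helper function is ours, not part of the SDK):

```javascript
// Minimum sustained upload speed needed to move one part of
// `partBytes` within `timeoutMs`, in bytes per second.
function minThroughput(partBytes, timeoutMs) {
  return partBytes / (timeoutMs / 1000);
}

var defaultPart = 5 * 1024 * 1024;   // 5 MB default part size
var defaultTimeout = 2 * 60 * 1000;  // 2 minute default timeout

// ~43,691 bytes/s, i.e. roughly 43 KB/s per concurrent part
console.log(minThroughput(defaultPart, defaultTimeout));
```

With the default `queueSize` of 4, four parts upload concurrently, so the whole connection needs about four times that.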
OK, I have to classify this as a bug until told otherwise. The documentation suggests the value can be changed, but the SDK does not honor the value passed in. The 2-minute limit is too low for us; we need to be able to increase it to 3-5 minutes.
Oh, it looks like you can configure it globally with:

```javascript
AWS.config.httpOptions = {
  xhrWithCredentials: false,
  xhrAsync: true,
  timeout: 60 * 1000 * 10
};
```

Is this sort of config acceptable? It seems a bit backward from the documentation. I would have assumed it should be done in `configure`:

```javascript
configure: function configure(options) {
  options = options || {};
  this.partSize = this.minPartSize;

  if (options.queueSize) this.queueSize = options.queueSize;
  if (options.partSize) this.partSize = options.partSize;
  if (options.leavePartsOnError) this.leavePartsOnError = true;

  if (this.partSize < this.minPartSize) {
    throw new Error('partSize must be greater than ' +
                    this.minPartSize);
  }

  this.service = options.service;
  this.bindServiceObject(options.params);
  this.validateBody();
  this.adjustTotalBytes();
},
```
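A side note on the `partSize` check in `configure()` above: `partSize` must clear `minPartSize` (5 MB), but for very large files it also has to be big enough that the upload fits within S3's documented maximum of 10,000 parts per multipart upload. A sketch of that calculation (the helper name is ours; the 10,000-part and 5 MB constants are S3's documented multipart limits):

```javascript
var MIN_PART_SIZE = 5 * 1024 * 1024; // S3 minimum part size, 5 MB
var MAX_PARTS = 10000;               // S3 maximum parts per upload

// Smallest partSize that both satisfies the 5 MB minimum and
// keeps the total part count at or under 10,000.
function choosePartSize(totalBytes) {
  return Math.max(MIN_PART_SIZE, Math.ceil(totalBytes / MAX_PARTS));
}

// A 1.5 GB file fits comfortably in 5 MB parts (~308 parts).
console.log(choosePartSize(1.5 * 1024 * 1024 * 1024));
```

For the 50 MB to 1.5 GB files in this thread, the 5 MB minimum always wins, so the part count is never the constraint here.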
Updating the global configuration might be acceptable depending on your application, and should work. However, `httpOptions` is not read from the options passed to `ManagedUpload`. If you want to pass options only to the uploader, you can pass your own service object:

```javascript
var uploader = new AWS.S3.ManagedUpload({
  service: new AWS.S3({httpOptions: {timeout: 60 * 1000 * 10}})
});
```

Note that you can also call `upload` on a configured client directly:

```javascript
var s3 = new AWS.S3({httpOptions: {...}});
s3.upload(params, callback);
```
Awesomesauce. I tried passing my own service object and it works. Thanks for the clarification. I find the configuration documentation a little lacking :( hopefully it can be improved a bit.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread. |
I have a problem, and I'm not entirely sure where to go with this. I've read all the documentation on uploading ten times and I can't figure out if this is actually a bug or not.
Basically: we upload from the browser directly to S3, on the order of 30-100 GB a day, with files ranging from 50 MB to 1.5 GB.
Initially we used the default settings, which chunked the large files and uploaded multiple chunks at a time.
In Australia and New Zealand the internet is a little slower, which causes timeouts on some individual chunks. We partially fixed this by changing the chunking to upload one chunk at a time.
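The "one chunk at a time" change described above presumably amounts to setting `queueSize` when constructing the uploader; a sketch of such a configuration (option values are illustrative, and `bucket`, `key`, and `file` are hypothetical variables):

```javascript
var upload = new AWS.S3.ManagedUpload({
  queueSize: 1,               // upload one part at a time
  partSize: 5 * 1024 * 1024,  // keep the default 5 MB parts
  params: { Bucket: bucket, Key: key, Body: file }
});
```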
So we have thousands of users uploading to S3; hundreds were failing but were fixed by switching to one chunk at a time.
But now we have tens of users who still fail to upload.
If we give them a console app that selects a 50 MB file and uploads directly to S3, it works perfectly using the C# SDK.
However, when they upload directly to S3 from the browser (IE 10+ or the latest Firefox/Chrome/Safari), one chunk at a time with otherwise default settings, 7 in 10 uploads fail.
As far as I'm aware, there is a time limit requiring each chunk to be uploaded within 2 minutes. Most chunks finish within those 2 minutes, but sometimes one chunk fails.
Is it possible to verify a chunk was uploaded, and if not, retry that chunk?
How can I get around the 5 MB or 2-minute limitations?
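On the retry question, which the thread above never answers directly: a per-chunk retry loop can be layered on top of whatever function actually sends a part. A minimal self-contained sketch, where `uploadPart` is a hypothetical placeholder for the real network call, not an SDK API:

```javascript
// Retry `fn` (a function returning a Promise) up to `maxAttempts` times,
// resolving on the first success and rejecting only after the last failure.
function withRetry(fn, maxAttempts) {
  return fn().catch(function (err) {
    if (maxAttempts <= 1) throw err;
    return withRetry(fn, maxAttempts - 1);
  });
}

// Hypothetical usage: `uploadPart` stands in for whatever PUTs one
// chunk and rejects on timeout.
// withRetry(function () { return uploadPart(chunk); }, 3)
//   .then(function () { console.log('chunk confirmed'); });
```

Note the S3 client itself also retries retryable errors internally, so a wrapper like this only helps for failures the SDK has already given up on.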