
Slow internet causes files to be deleted because of timeout #683

Closed · phillip-haydon opened this issue Aug 17, 2015 · 6 comments

Labels: guidance (Question that needs advice or information.)

Comments

@phillip-haydon

I have a problem, and I'm not entirely sure where to go with this. I've read all the documentation on uploading ten times and I can't figure out whether this is actually a bug or not.

Basically: we upload from the browser directly to S3, in the region of 30-100 GB a day, with files ranging from 50 MB to 1.5 GB.

Initially we used the default settings which allowed for chunking of the large files to S3 with multiple chunks being uploaded at a time.

In Australia and New Zealand the internet is a little slower, which causes timeouts on some individual chunks. We partially fixed this by changing the queue size so chunks upload one at a time:

var managedUpload = new AWS.S3.ManagedUpload({
    queueSize: options.aWSQueueSize || 4, // we pass aWSQueueSize = 1 to serialize chunk uploads
    params: params
});

So we have thousands of users uploading to S3; hundreds were failing, but were fixed by changing the queue size to 1.

But we still have tens of users whose uploads fail.

If we give them a console app which selects a 50 MB file and uploads it directly to S3, it works perfectly fine using the C# SDK.

However, when using the browser (IE 10+ or the latest Firefox/Chrome/Safari) to upload directly to S3, one chunk at a time with otherwise default settings, 7 in 10 uploads will fail.


Now, as far as I'm aware, there is a time limit saying each upload request needs to complete within 2 minutes. Most of the chunks they upload finish within those 2 minutes, but sometimes one of those chunks fails.


Is it possible to verify a chunk was uploaded, and if not, retry that chunk?

How can I get around the 5 MB or 2-minute limitations?
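
For the first question, as far as I can tell the parts that did land can be listed with S3's listParts call. A rough sketch, assuming the multipart UploadId is at hand (the bucket and key names are placeholders):

s3.listParts({
    Bucket: 'my-bucket',   // placeholder
    Key: 'my-large-file',  // placeholder
    UploadId: uploadId     // returned by createMultipartUpload
}, function (err, data) {
    if (err) return console.error(err);
    // data.Parts lists PartNumber and ETag for each part that reached S3,
    // so any missing PartNumber is a chunk that needs re-sending.
    console.log(data.Parts.map(function (p) { return p.PartNumber; }));
});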

@phillip-haydon (Author)

It seems the 2-minute limitation is hard-coded into the SDK: after overriding the value in the SDK source itself to 10 minutes, the chunks upload successfully without fail.

However, the timeout cannot be modified from outside the SDK:

var managedUpload = new AWS.S3.ManagedUpload({
    queueSize: options.aWSQueueSize || 4,
    params: params,
    httpOptions: { // has no effect here: ManagedUpload ignores this option (see below)
        timeout: 60 * 1000 * 10 // 10 minutes
    }
});

The httpOptions are those documented at http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property.

@phillip-haydon (Author)

OK, I have to classify this as a bug until told otherwise. The documentation suggests the value can be changed, but the SDK does not honor the value passed in.

The 2-minute limit is too low for us; we need to be able to increase it to 3-5 minutes.

@phillip-haydon
Copy link
Author

Oh, it looks like you can configure it globally with:

AWS.config.httpOptions = {
    xhrWithCredentials: false,
    xhrAsync: true,
    timeout: 60 * 1000 * 10 // 10 minutes
};

Is this sort of config acceptable? It seems a bit backwards compared to the documentation.
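
One thing I'm assuming from how the SDK resolves configuration: service objects copy the global settings when they are constructed, so the assignment has to happen before the uploader is created. Sketch:

// Assign the global HTTP options *before* constructing the uploader;
// a service object created earlier keeps its previously resolved config.
AWS.config.httpOptions = { timeout: 60 * 1000 * 10 };

var managedUpload = new AWS.S3.ManagedUpload({ params: params });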

I would have assumed it should be done on line 3449, which configures the ManagedUpload request:

  configure: function configure(options) {
    options = options || {};
    this.partSize = this.minPartSize;

    if (options.queueSize) this.queueSize = options.queueSize;
    if (options.partSize) this.partSize = options.partSize;
    if (options.leavePartsOnError) this.leavePartsOnError = true;

    if (this.partSize < this.minPartSize) {
      throw new Error('partSize must be greater than ' +
                      this.minPartSize);
    }

    this.service = options.service;
    this.bindServiceObject(options.params);
    this.validateBody();
    this.adjustTotalBytes();
  },

@lsegal (Contributor) commented Aug 17, 2015

Updating the global configuration might be acceptable depending on your application, and should work. However, regarding:

“I would have assumed it should be done on line 3449 which configures the ManagedUpload request”

httpOptions is not read from the AWS.S3.ManagedUpload constructor; it is read from the AWS.S3 constructor, which is a different object. The documentation for AWS.S3.ManagedUpload's constructor shows the valid options.

Specifically, if you want to pass options only to the uploader, you can pass your own AWS.S3 object with this config via the service param:

var uploader = new AWS.S3.ManagedUpload({
  service: new AWS.S3({httpOptions: {timeout: 60 * 1000 * 10}}),
  params: params // your upload params, as before
});
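
The same service object is also where retry behaviour lives, so if the failures are intermittent you could combine the longer timeout with more retries. A sketch; the values are illustrative, not recommendations:

var uploader = new AWS.S3.ManagedUpload({
  service: new AWS.S3({
    httpOptions: {timeout: 60 * 1000 * 10}, // 10-minute per-request timeout
    maxRetries: 10 // retry each failed part request up to 10 times
  }),
  params: params // your upload params, as before
});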

Note that you can also use the AWS.S3.upload() method to initiate managed uploads, which will do this for you:

var s3 = new AWS.S3({httpOptions: {...}});
s3.upload(params, callback);
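
Called without a callback, upload() returns the ManagedUpload object, so progress can be tracked by attaching a listener before sending. A sketch using the SDK's httpUploadProgress event:

var s3 = new AWS.S3({httpOptions: {timeout: 60 * 1000 * 10}});
s3.upload(params)
  .on('httpUploadProgress', function (evt) {
    // evt.loaded and evt.total are byte counts for the whole upload
    console.log('uploaded', evt.loaded, 'of', evt.total, 'bytes');
  })
  .send(callback);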

@phillip-haydon (Author)

Awesomesauce.

I tried s3.upload but couldn't get the progress updates working, so I went with the service approach. Working fine.

Thanks for the clarification. I find the configuration documentation a little lacking :( Hopefully it can be improved a bit.

@srchase added the guidance (Question that needs advice or information.) label and removed the Question label on Jan 4, 2019
@lock (bot) commented Sep 28, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

The lock bot locked this conversation as resolved and limited it to collaborators on Sep 28, 2019.