Not able to publish components of size > 500MB #514

Closed
JuniorDev4Lyf opened this issue Jun 7, 2017 · 10 comments
Comments

@JuniorDev4Lyf
Contributor

JuniorDev4Lyf commented Jun 7, 2017

I am not able to publish a component (which is 570 MB in size) to an AWS S3 bucket from my local computer (Mac OS X Sierra).

I often see the error below when it fails:

An error happened when publishing the component: Error: socket hang up

I tried to publish the component in the following way:


node index.js bucket= region= key= secret=
oc registry add http://localhost:3000
oc publish

Output:
Packaging -> goes fine
Compressing -> goes fine
Publishing -> component url
An error happened when publishing the component: Error: socket hang up

Is there any workaround for this?
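(For reference, a minimal sketch of what the index.js started above typically looks like, based on the registry configuration documented in the oc README; the key=value argument parsing is illustrative and not part of oc itself.)

```js
// Minimal registry entry point backed by S3 (sketch only).
var oc = require('oc');

// Naive key=value argument parsing to match `node index.js bucket=... region=... key=... secret=...`
// (hypothetical helper, not an oc API).
var args = process.argv.slice(2).reduce(function (acc, pair) {
  var parts = pair.split('=');
  acc[parts[0]] = parts[1];
  return acc;
}, {});

var registry = new oc.Registry({
  baseUrl: 'http://localhost:3000/',
  port: 3000,
  tempDir: './temp/',
  refreshInterval: 600,
  pollingInterval: 5,
  s3: {
    bucket: args.bucket,
    region: args.region,
    key: args.key,
    secret: args.secret,
    path: '//s3.amazonaws.com/' + args.bucket + '/',
    componentsDir: 'components'
  },
  env: { name: 'production' }
});

registry.start(function (err) {
  if (err) {
    console.log('Registry not started: ', err);
    process.exit(1);
  }
});
```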

@matteofigus
Member

Hi @NapoleanReigns, I have never tried to publish a component that big, but it should work. The file limits are exposed here: https://github.com/opentable/oc/blob/master/src/registry/middleware/file-uploads.js#L14 - I can try testing locally and see how it goes.

Out of curiosity, have you tried publishing other components with success?

@JuniorDev4Lyf
Contributor Author

Yes, I am able to publish other (smaller) components successfully.

@matteofigus
Member

Trying to reproduce locally, I managed to get this error during the upload of the huge file:

{ TimeoutError: Connection timed out after 10000ms
    at ClientRequest.<anonymous> (/Users/mfigus/Documents/gh/opentable/oc/node_modules/aws-sdk/lib/http/node.js:83:34)
    at ClientRequest.g (events.js:291:16)
    at emitNone (events.js:86:13)
    at ClientRequest.emit (events.js:185:7)
    at TLSSocket.emitTimeout (_http_client.js:629:10)
    at TLSSocket.g (events.js:291:16)
    at emitNone (events.js:86:13)
    at TLSSocket.emit (events.js:185:7)
    at TLSSocket.Socket._onTimeout (net.js:342:8)
    at ontimeout (timers.js:380:14)
  message: 'Connection timed out after 10000ms',
  code: 'NetworkingError',
  time: 2017-06-11T12:06:34.230Z,
  region: 'eu-west-1',
  hostname: 'oc-registry.s3-eu-west-1.amazonaws.com',
  retryable: true }

In my case, I have a big video. So, this is the first thing I'll try to handle.

@matteofigus
Member

I have opened a PR to fix various things.
Once it is merged:

  1. options.s3.timeout needs to be increased for large uploads. The default is 10s; for large file uploads I recommend customising it to 2m.
  2. A new setting, options.timeout, lets the server keep long connections open. The current default of 2m can be raised to 10m.

With the server timeout set to 10m (so that express doesn't hang with an ECONNRESET) and the s3 timeout set to 2m (uploads happen in chunks after this PR, so this may not even be strictly necessary), I managed to upload ~300MB components without problems (it takes a couple of minutes, but everything seems to work).
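(As a sketch of what this would look like once the PR lands, assuming the two settings are exposed as a top-level timeout and an s3.timeout key in the registry configuration, both in milliseconds:)

```js
// Sketch only: registry configuration with the two timeouts discussed above.
var registry = new oc.Registry({
  // ...baseUrl, port, tempDir, etc. as before...
  timeout: 600000,      // 10m: keep publish connections open on the server
  s3: {
    // ...bucket, region, key, secret as before...
    timeout: 120000     // 2m: per-request timeout for S3 uploads
  }
});
```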

matteofigus self-assigned this Jun 11, 2017
@mattiaerre
Member

@matteofigus so do you think we should be able to publish components that big? I am not sure this is the right approach.

@matteofigus
Member

matteofigus commented Jun 11, 2017

@mattiaerre I prefer to let the user have their own opinion. More specifically, the framework can be opinionated in terms of defaults (10s for AWS requests and a 2m response timeout seem in line with keeping the resources in your components small and optimised), but I like the idea of the contributor being able to change these settings to satisfy different business needs.

Also, 2 more things to keep in mind:

  1. The PR changes the AWS upload to be chunked, which is more robust for relatively small files too (5-10MB videos/images are probably quite common even in a legitimately small component) - a sketch of a chunked upload follows this list.
  2. The ability to tweak connection timeouts is largely independent of file size. When uploading from countries where connections are very slow or unstable, the same problem can affect relatively small files too, so being able to change these timeouts is a good feature to add in my opinion.
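(To illustrate point 1 - this is not the code from the PR, just a sketch of a chunked upload using the AWS SDK for Node.js, which streams the file to S3 in parts instead of a single PUT; the bucket and key names are examples.)

```js
// Chunked (managed multipart) upload sketch with aws-sdk v2.
var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3({
  region: 'eu-west-1',
  httpOptions: { timeout: 120000 } // per-request timeout (2m)
});

s3.upload(
  {
    Bucket: 'oc-registry',                                // example bucket
    Key: 'components/my-component/1.0.0/package.tar.gz',  // example key
    Body: fs.createReadStream('./package.tar.gz')
  },
  { partSize: 10 * 1024 * 1024, queueSize: 1 },            // upload in 10MB parts
  function (err, data) {
    if (err) { return console.error('Upload failed:', err); }
    console.log('Uploaded to', data.Location);
  }
);
```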

In conclusion, my main goal with #516 is to make some good optimisations that became visible after investigating this issue. If I can also solve this one in parallel, I am happy :)

@mattiaerre
Member

I understand your point of view, but my concern is that we should always pick the right tool for the job, and to me hosting big assets inside a component is not the right approach, as we will end up with huge components at every publish. Very happy with the side effects of this PR btw, this is really cool 😎

@JuniorDev4Lyf
Contributor Author

JuniorDev4Lyf commented Jun 12, 2017

Hey guys, just to give you some more info:

I ran into this issue consistently the other day with a component of around 270MB in size.
The component I was publishing had some JavaScript files of around 10MB each, and the rest were in KBs; it's just that my component contains a very large number of files.

Does the number of files also matter here?

@matteofigus
Member

Yeah, 2m total for multiple s3 uploads might not be enough, and making it configurable could be a quick win.

@matteofigus
Member

@NapoleanReigns I just published 0.37.10, which should allow you to complete the upload (after changing the default settings as explained in my previous comment). I would recommend s3.timeout => 120000 (2m) and a registry timeout of 600000 (10m).

Can you try and let me know?
Thanks
