Discussion about performance #54
No benchmarks. The main reason we bundle all the builds at once is to make sure it always works, including when install scripts are disabled. If you wanna test it yourself, sodium-native is the beefiest one I build (3.5 MB total binaries). I'm sure you can find some packages that fetch on demand using multiple roundtrips.
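For context, a simplified sketch of why bundling all builds avoids install scripts entirely: the matching binary can be picked at require time instead of at install time. This is not the actual node-gyp-build implementation; the `prebuilds/<platform>-<arch>` layout and file name here are assumptions for illustration.

```ts
import * as path from "path";

// Pick the bundled prebuilt binary for the current platform at require time.
// No install or postinstall script has to run for this to work.
function prebuildPath(packageDir: string): string {
  const target = `${process.platform}-${process.arch}`; // e.g. "linux-x64"
  // Hypothetical layout: <pkg>/prebuilds/<platform>-<arch>/node.napi.node
  return path.join(packageDir, "prebuilds", target, "node.napi.node");
}

// Usage (sketch): const addon = require(prebuildPath(__dirname));
```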
As a non-scientific example, here are some results from my computer on my network (Copenhagen, Denmark), YMMV.
Note that after installing the "on-demand" libraries they still have to do another roundtrip to fetch the actual builds.
If you wanna test a fatter one, try https://www.npmjs.com/package/rocksdb (42 MB unpacked).
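If you want to reproduce such a comparison yourself, here is a rough, non-scientific harness (assuming `npm` is on your PATH; the package names are just the two mentioned above). Network conditions and npm's cache will dominate the numbers, so treat the results as indicative only.

```ts
import { execSync } from "child_process";
import { mkdtempSync } from "fs";
import { tmpdir } from "os";
import { join } from "path";

// Install a package into a fresh temporary directory and return the
// wall-clock install time in milliseconds.
function timeInstall(pkg: string): number {
  const dir = mkdtempSync(join(tmpdir(), "bench-"));
  const start = Date.now();
  execSync(`npm install --no-audit --no-fund ${pkg}`, { cwd: dir, stdio: "ignore" });
  return Date.now() - start;
}

for (const pkg of ["sodium-native", "rocksdb"]) {
  console.log(`${pkg}: ${timeInstall(pkg)} ms`);
}
```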
Thank you for the feedback. I tested your fat package; on my machine the run time is about the same for the 42 MB unpacked package as for a 12 MB package installed in two stages. Now I have two other questions:
That would (with current npm abilities) require a postinstall script. I wouldn't want to add that requirement to
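For illustration, a minimal sketch of what such an on-demand postinstall fetch typically looks like (the release URL scheme and file names are made up; this is the extra roundtrip mentioned above, and it fails outright when install scripts are disabled):

```ts
import { createWriteStream } from "fs";
import { get } from "https";

// Download only the binary for the current platform/arch after the package
// itself has been installed. Hypothetical URL; a real package would point
// at its own release hosting.
const target = `${process.platform}-${process.arch}`;
const url = `https://example.com/releases/v1.0.0/addon-${target}.node`;

get(url, (res) => {
  if (res.statusCode !== 200) {
    console.error(`No prebuilt binary for ${target} (HTTP ${res.statusCode})`);
    process.exit(1);
  }
  res.pipe(createWriteStream("addon.node"));
});
```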
In an ideal world, maybe! That's a big task though, because the npm client would have to collect various data points (OS, libuv version, ARM version, libc flavor, etc.), allow them to be overridden in various scenarios (notably when installing packages on a machine that won't eventually run the code), and make it easy (enough) to add new data points. There are downsides to this approach too. For example, when switching between platforms (in Docker containers and whatnot) with a shared
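As a rough illustration of the data points mentioned above, here is what a running Node process can report about itself (a sketch only; libc detection in particular needs more care, and the `process.report`-based glibc check is only available on newer Node versions and glibc systems):

```ts
import * as os from "os";

// Gather the kind of build-target information the npm client would need.
const report: any = process.report?.getReport?.();

const buildTarget = {
  platform: process.platform,                                 // "linux", "darwin", "win32", ...
  arch: process.arch,                                          // "x64", "arm64", ...
  nodeAbi: process.versions.modules,                           // native ABI version
  libuv: process.versions.uv,                                  // libuv version
  armVersion: (process.config.variables as any).arm_version,   // only set on ARM builds
  glibc: report?.header?.glibcVersionRuntime,                  // undefined on musl / non-Linux
  endianness: os.endianness(),
};

console.log(buildTarget);
```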
This is about documentation, not a bug.
It is claimed that a bundle with binaries for multiple platforms is faster than an install script for a single platform.
This is reasonable for small modules, but not necessarily for large ones. Do you have any benchmark data that you could share? It would be interesting to point this out in the README for future readers. If not, I would be happy to help with this evaluation.
Thank you