Improve chunking performance #4862
Thank you for your contribution! ❤️ You can try out this pull request locally by installing Rollup via `npm install rollup/rollup#improve-chunking-performance` or load it into the REPL:
Codecov Report
@@ Coverage Diff @@
## master #4862 +/- ##
=======================================
Coverage 98.97% 98.98%
=======================================
Files 219 219
Lines 7927 7943 +16
Branches 2195 2189 -6
=======================================
+ Hits 7846 7862 +16
Misses 26 26
Partials 55 55
This PR has been released as part of rollup@3.17.0. You can test it via:
This PR contains:
Are tests included?
Breaking Changes?
`output.experimentalDeepDynamicChunkOptimization` is deprecated, no longer does anything, and shows a warning instead.
List any relevant issue numbers:
Description
Rollup has an advanced chunking algorithm that can detect if a dependency of a dynamic entry that is shared with the dynamic importer must already be in memory when the dynamic import is loaded. In such a scenario, it will not create a separate chunk for the dependency but import it from the importing chunk.
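To make the scenario concrete, here is a hypothetical three-module setup (file names invented for illustration, not taken from this PR or the linked issues):

```typescript
// --- main.ts (static entry) ---
// Statically imports the shared dependency and dynamically imports the feature.
import { shared } from './shared';

shared();
import('./feature').then(({ runFeature }) => runFeature());

// --- feature.ts (dynamic entry) ---
// Also depends on the shared module.
import { shared } from './shared';

export const runFeature = () => shared();

// --- shared.ts (dependency of both) ---
export const shared = () => console.log('shared code');
```

Because `main.ts` loads `shared.ts` statically, `shared.ts` is guaranteed to already be in memory whenever the dynamic import of `feature.ts` runs, so the `feature` chunk can import it from the `main` chunk rather than Rollup emitting a separate chunk for it.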
This can avoid quite a few chunks, but as #4740 showed, the algorithm has a big problem: its complexity is at least `O(e*d*m)`, where `d` is the number of dynamic imports, `e` the number of entry points and `m` the number of modules. And as it turned out, for large projects this could completely blow up.

A stop-gap measure was to introduce the `output.experimentalDeepDynamicChunkOptimization` option to make the algorithm "dumber" but much faster. This helped performance, but it created quite a few unnecessary chunks for some users.

However, I managed to completely rewrite the original algorithm to solve all problems! And part of the solution was to use BigInt as a high-performance Set replacement!
How does it work? Assume you have a fixed number of objects that you can index with numbers. As BigInts have arbitrary precision, you can then assign each object a bit in the BigInt. So to add element 24, I would do something like the following (a sketch of the idea, not the exact code from this PR):
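```typescript
// A BigInt used as a bit set: bit i is set exactly when element i is in the set.
let set = 0n;

// Add element 24 by OR-ing in bit 24.
set = set | (1n << 24n);
```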
Note the single `|`, which is a bitwise OR. But the true power shows when I have to compare, merge or intersect such sets; a sketch of these operations follows after this paragraph.

The only thing that is worse for BigInt sets is that you cannot easily iterate over the elements of a set. Either you iterate over all bits, which is unnecessarily many, or you devise some devilish divide-and-conquer scheme that still has O(log n) complexity even for a set with a single element. But luckily, I did not need iteration everywhere.
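Here is a rough sketch of what such BigInt-backed set operations could look like. The helper names (`union`, `intersection`, `isSubset`, `elements`) and their exact shape are illustrative assumptions, not code taken from this PR:

```typescript
// A BigInt used as a bit set over a fixed universe of elements:
// bit i is set exactly when element i is in the set.
type BigIntSet = bigint;

// Merging two sets is a single bitwise OR.
const union = (a: BigIntSet, b: BigIntSet): BigIntSet => a | b;

// Intersecting two sets is a single bitwise AND.
const intersection = (a: BigIntSet, b: BigIntSet): BigIntSet => a & b;

// Comparing two sets is a single equality check.
const equals = (a: BigIntSet, b: BigIntSet): boolean => a === b;

// "Is every element of a also in b?" is an AND plus a comparison.
const isSubset = (a: BigIntSet, b: BigIntSet): boolean => (a & b) === a;

// Iteration is the weak spot: this divide-and-conquer enumeration skips empty
// halves but still costs O(log size) even when the set has a single element.
function* elements(set: BigIntSet, offset: number, size: number): Generator<number> {
  if (set === 0n) return;
  if (size === 1) {
    yield offset;
    return;
  }
  const half = size >> 1;
  const lowMask = (1n << BigInt(half)) - 1n;
  yield* elements(set & lowMask, offset, half);
  yield* elements(set >> BigInt(half), offset + half, size - half);
}

// Example: sets over a universe of 8 modules.
const a = (1n << 1n) | (1n << 3n) | (1n << 6n); // { 1, 3, 6 }
const b = (1n << 3n) | (1n << 4n); // { 3, 4 }
console.log(equals(intersection(a, b), 1n << 3n)); // true
console.log(isSubset(1n << 6n, a)); // true
console.log([...elements(a, 0, 8)]); // [ 1, 3, 6 ]
```

The point of the representation is that union, intersection and comparison each collapse into one big-integer operation over the whole set instead of per-element `Set` bookkeeping, which is what the rewritten algorithm exploits.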
By restructuring the algorithm to make heavy use of such intersections and merges, together with some other algorithmic improvements, I was able to severely improve performance. Using the example of #4740 as a baseline, I got the following numbers:

#4740 has 1 static entry, 1450 dynamic entries and no manual chunks. Time spent in `getChunkAssignments`:
- with the current algorithm and `experimentalDeepDynamicChunkOptimization` disabled:
- with `experimentalDeepDynamicChunkOptimization` enabled: