feat(optimizer): holdUntilCrawlEnd option #15244
Conversation
```diff
@@ -160,7 +182,7 @@ async function createDepsOptimizer(
   // On warm start or after the first optimization is run, we use a simpler
   // debounce strategy each time a new dep is discovered.
   let crawlEndFinder: CrawlEndFinder | undefined
-  if (isBuild || !cachedMetadata) {
+  if ((!noDiscovery && isBuild) || !cachedMetadata) {
```
When `noDiscovery: true`, we don't need the `crawlEndFinder`. This simplifies the logic in `onCrawlEnd`.
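For context, the `noDiscovery: true` branch discussed here corresponds to a user config along these lines (a minimal sketch; the dependency names are placeholders, not from the PR):

```ts
// vite.config.ts — hypothetical setup with dependency discovery disabled:
// only the explicitly included deps are optimized, so no newly discovered
// deps are registered during the crawl and its end doesn't need tracking.
import { defineConfig } from 'vite'

export default defineConfig({
  optimizeDeps: {
    noDiscovery: true,
    include: ['dep-a', 'dep-b'],
  },
})
```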
```ts
const manuallyIncluded = Object.keys(manuallyIncludedDepsInfo)
discoveredDepsWhileScanning.push(
  ...Object.keys(metadata.discovered).filter(
    (dep) => !deps[dep] && !manuallyIncluded.includes(dep),
  ),
)
```
In case we issue the warning to add dependencies to `optimizeDeps.include`, I think we should also ask the user to add the deps discovered while the scanner was running. The scanner may end before some of these are found during the crawling of static imports in future cold starts.
IIUC you mean we also log the dependencies found by the scanner? (Not the static imports crawler.) I'm not sure it would be ideal to request adding them to `optimizeDeps.include` manually; I think it would be enough to request the ones found by the crawler only.

In general, I think missed dependencies could mean a bug in our code (e.g. `new URL('./somewhere', import.meta.url)`, which I never got around to fixing 😅) or an `optimizeDeps.entries` misconfiguration. So we can be a bit conservative in asking users to add deps to `optimizeDeps.include` and try to fix the root cause instead.
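For reference, the pattern mentioned above looks roughly like this (a hypothetical module; the file name is made up). Deps reached only through such runtime-resolved URLs are the kind the comment says can currently be missed:

```ts
// Hypothetical example of the `new URL(..., import.meta.url)` pattern:
// a sibling file resolved relative to the current module at runtime.
const workerUrl = new URL('./heavy-task.worker.js', import.meta.url)
const worker = new Worker(workerUrl, { type: 'module' })

worker.postMessage({ start: true })
```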
No, if something is found by the scanner, it is fine. I mean logging the dependencies that were not found by the scanner or manually included. But some of them may have been found before the scanner finished, and right now we are also using them on the first optimize run. `discoveredDepsWhileScanning` should maybe be named `missedDepsWhileScanning`.
Ah ok, thanks for the explanation. I don't have a preference for the names; go with whichever you think fits better here.
```ts
if (warnAboutMissedDependencies) {
  logDiscoveredDepsWhileScanning()
  config.logger.info(
    colors.magenta(
      `❗ add these dependencies to optimizeDeps.include to avoid a full page reload during cold start`,
    ),
    { timestamp: true },
  )
  warnAboutMissedDependencies = false
}
```
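For illustration, acting on that warning would mean adding the reported names to the user config, roughly like this (a sketch; the dependency names are placeholders):

```ts
// vite.config.ts — hypothetical example of pre-including deps the scanner
// missed, so future cold starts avoid a full page reload.
import { defineConfig } from 'vite'

export default defineConfig({
  optimizeDeps: {
    include: ['missed-dep-a', 'missed-dep-b'],
  },
})
```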
We could ask the user to only add the dependencies that triggered the full-page reload (the ones that have custom chunks). But adding all the dependencies is better as it will avoid a new optimize run and let the browser download these dependencies as soon as the scan+optimize step is done. So better to ask for all of them to be included.
```ts
if (isBuild) {
  currentlyProcessing = false
  const crawlDeps = Object.keys(metadata.discovered)
  if (crawlDeps.length === 0) {
    debug?.(
      colors.green(
        `✨ no dependencies found while processing user modules`,
      ),
    )
    firstRunCalled = true
  } else {
    runOptimizer()
  }
  return
}
```
Moved the build-time logic here to simplify the dev logic. The comment could also be improved, and during build we don't need the debounce. We want to remove this anyway, but the change helped with this PR.
I tried to understand the optimizer flow, but I don't quite grasp it 😅 That said, I tested locally and it seems to work well.

In a future minor (so we get some time to test it in a beta), we should start from scratch and move all this code to a state machine. It ended up being quite hard to follow after all the accumulated changes.
(I have the review of this one on my todo; I think I'll get to it this weekend.)
Description
After Vite 3.0, our cold start strategy changed as described in #8869.

An issue with this strategy is that, for the default non-HTTPS server, the browser can only make 6 concurrent requests. Each optimized dep found takes one of these requests, so requests can stall until the server finishes crawling static imports. For large projects, this means the browser is potentially idle when it could be downloading and parsing modules. We hold the requests to avoid full-page reloads when the scanner misses a dependency, but we are paying that cost in all projects where every dep is discovered by the scanner or manually included.
This PR adds a new option, `optimizeDeps.holdUntilCrawlEnd`. If `true` (the default), it uses the current strategy. If `false`, it releases to the browser the optimized deps computed after the scanner as soon as they are available. We avoid the possibility of the browser being idle while the server crawls static imports, with the trade-off that if dependencies are missed by the scanner, there could be a full-page reload.

There are some differences with a similar strategy we had in Vite 2:
- Missed dependencies are now reported with a warning so users can add them to `optimizeDeps.include` and avoid full-page reloads on future cold starts.

We discussed these strategies with @bluwy and @ArnaudBarre, and this PR could be a first step to change the default cold start strategy to `holdUntilCrawlEnd: false` in a future minor, depending on the feedback we get from users.
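For illustration, opting into the new behavior would look something like this (a sketch, assuming the option ships under `optimizeDeps` as described above):

```ts
// vite.config.ts — hypothetical opt-in: release the scanner's optimized deps
// to the browser immediately instead of holding them until crawling ends.
import { defineConfig } from 'vite'

export default defineConfig({
  optimizeDeps: {
    // defaults to true (current hold-until-crawl-end behavior)
    holdUntilCrawlEnd: false,
  },
})
```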