(cherrypick b09e102f) [Fizz] Prevent uncloned large precomputed chunks without relying on render-time assertions #28580 #28585
Merged
Conversation
facebook-github-bot added the CLA Signed and React Core Team (Opened by a member of the React Core Team) labels on Mar 19, 2024.
ztanner added a commit to vercel/next.js that referenced this pull request on Mar 19, 2024:
Updates React from 60a927d04 to 4b84f1161. React upstream changes: facebook/react#28585. Closes NEXT-2865.
(cherrypick b09e102 on 60a927d)
[Fizz] Prevent uncloned large precomputed chunks without relying on render-time assertions (#28568)
A while back we implemented a heuristic: if a chunk was large, it was assumed to have been produced by the render and was therefore safe to stream in a way that transfers the underlying object memory. Later we ran into an issue where a precomputed chunk grew large enough to trigger this heuristic, and it started causing renders to fail because, once a second render had occurred, the precomputed chunk no longer had an underlying buffer of bytes to send and those bytes would be omitted from the stream. We implemented a technique to detect large precomputed chunks and enforced that they always be cloned before writing. Unfortunately our test coverage was not perfect, and there has been, for a very long time now, a usage pattern where if you complete a boundary in one flush and then complete a boundary that has stylesheet dependencies in another flush, you can get a large precomputed chunk that was not being cloned sent twice, causing streaming errors.
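A minimal sketch of that failure mode, using illustrative names (`VIEW_SIZE`, `writeChunk`, `completeBoundaryWithStylesScript`) rather than React's actual internals: it relies on the fact that enqueuing a view into a readable byte stream transfers, and thereby detaches, its underlying ArrayBuffer.

```js
// Sketch of the old heuristic's failure mode (assumed names, not React code).
const VIEW_SIZE = 512;

// Precomputed once at module scope and reused by every render/flush.
const completeBoundaryWithStylesScript = new TextEncoder().encode(
  '<script>' + '/* boundary-completion runtime */'.repeat(40) + '</script>'
);

function writeChunk(controller, chunk) {
  if (chunk.byteLength > VIEW_SIZE) {
    // Old heuristic: large chunks are assumed to come from the render, so the
    // buffer is handed straight to the stream. In a readable byte stream,
    // enqueue() transfers (detaches) chunk.buffer.
    controller.enqueue(chunk);
    return;
  }
  // Small chunks would be copied into an internal view instead (omitted here).
}

new ReadableStream({
  type: 'bytes',
  start(controller) {
    writeChunk(controller, completeBoundaryWithStylesScript); // first flush: bytes go out
    // A later flush that writes the same module-scope chunk sends nothing: the
    // buffer was transferred, so byteLength is now 0 and the script is dropped.
    console.log(completeBoundaryWithStylesScript.byteLength); // 0
    writeChunk(controller, completeBoundaryWithStylesScript);
    controller.close();
  },
});
```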
I've thought about why we went with this solution in the first place, and I think it was a mistake. It relies on a dev-only check, paired with a potentially version-specific order of operations on the streaming side, to catch violations. That is too unreliable. Additionally, the lower view-size limit used for Edge is not used in Node.js, and there is no real justification for the difference.
In this change I updated the view size for Edge streaming to match Node at 2048 bytes, which is still relatively small, and we have no data one way or another to prefer 512 over it. Then I updated the assertion logic to error any time a precomputed chunk exceeds that size. This eliminates the need to clone these chunks by simply ensuring our view size is always larger than the largest precomputed chunk we can possibly write. I'm generally in favor of this approach for a few reasons.
First, as long as we exercise each stream config during testing we'll always know whether we've violated the limit, because the precomputed chunks are created in module scope. Second, we can always split up large chunks, so making sure each precomputed chunk is smaller than whatever view size we actually want is relatively trivial.
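A rough sketch of the new arrangement, again with illustrative names (`VIEW_SIZE`, `stringToPrecomputedChunk`) rather than the exact React implementation: the size check lives where precomputed chunks are created, and because that happens at module scope, a violation surfaces as soon as the stream config's module is evaluated in a test rather than at render time.

```js
// Sketch of the idea, not React's exact code: validate precomputed chunks
// against the view size when they are created, at module scope.
const VIEW_SIZE = 2048; // Edge raised from 512 to match the Node.js config

const encoder = new TextEncoder();

function stringToPrecomputedChunk(content) {
  const chunk = encoder.encode(content);
  if (chunk.byteLength > VIEW_SIZE) {
    // Throws at module-evaluation time, so exercising the stream config in any
    // test is enough to catch a chunk that outgrew the view size. Chunks that
    // legitimately need to be this large can be split into smaller ones.
    throw new Error(
      'Precomputed chunks must be smaller than the view size of the stream. ' +
        'Split this chunk into smaller precomputed chunks.'
    );
  }
  return chunk;
}

// Module-scope usage: the check runs once at import time, never during a render.
const completeBoundaryWithStylesScript = stringToPrecomputedChunk(
  '<script>/* boundary-completion runtime */</script>'
);
```

With a check along these lines in place, nothing that reaches the large-chunk transfer path can be a precomputed chunk, so the clone-before-write step, and the dev-only detection it depended on, is no longer needed.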