[Fizz] Prevent uncloned large precomputed chunks without relying on render-time assertions #28568

Merged: 1 commit merged into facebook:main on Mar 16, 2024

Conversation

@gnoff (Collaborator) commented Mar 15, 2024

A while back we implemented a heuristic: if a chunk was large, it was assumed to have been produced by the render and was therefore safe to stream in a way that transfers the underlying object memory. Later we ran into an issue where a precomputed chunk grew large enough to trigger this heuristic, and it started causing renders to fail: once a second render had occurred, the precomputed chunk no longer had an underlying buffer of bytes to send, and those bytes were omitted from the stream. We implemented a technique to detect large precomputed chunks and enforced that they always be cloned before writing. Unfortunately our test coverage was not perfect, and for a very long time there has been a usage pattern that slipped through: if you complete a boundary in one flush and then complete a boundary with stylesheet dependencies in another flush, a large precomputed chunk that was not being cloned could be sent twice, causing streaming errors.
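
To make the failure mode concrete, here is a minimal sketch of this kind of size heuristic. The names (`VIEW_SIZE`, `writeChunk`, `enqueue`) and the 512-byte value are illustrative assumptions, not the actual Fizz stream config:

```ts
// Illustrative sketch only; not React's actual stream config.
const VIEW_SIZE = 512; // the old, small Edge view size
let currentView = new Uint8Array(VIEW_SIZE);
let writtenBytes = 0;
const sink: Uint8Array[] = []; // stands in for the underlying stream

function enqueue(bytes: Uint8Array) {
  // In a real byte stream this is where a chunk's backing memory can be
  // transferred (detached) rather than copied.
  sink.push(bytes);
}

function flushView() {
  if (writtenBytes > 0) {
    enqueue(currentView.subarray(0, writtenBytes));
    currentView = new Uint8Array(VIEW_SIZE);
    writtenBytes = 0;
  }
}

function writeChunk(chunk: Uint8Array) {
  if (chunk.byteLength > VIEW_SIZE) {
    // The heuristic: a chunk this large is assumed to be render output, so it
    // is flushed as-is and its buffer may be handed off to the stream. If it
    // is actually a module-scope precomputed chunk, a later render finds that
    // buffer detached and those bytes silently drop out of the output.
    flushView();
    enqueue(chunk);
    return;
  }
  if (chunk.byteLength > VIEW_SIZE - writtenBytes) {
    flushView();
  }
  // Small chunks are copied into the view, so the original is never detached.
  currentView.set(chunk, writtenBytes);
  writtenBytes += chunk.byteLength;
}
```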

I've thought about why we even went with this solution in the first place, and I think it was a mistake. It relies on a dev-only check to catch violations, paired with a potentially version-specific order of operations on the streaming side. This is too unreliable. Additionally, the low view-size limit used for Edge is not used in Node.js, and there is no real justification for the difference.

In this change I updated the view size for Edge streaming to match Node at 2048 bytes, which is still relatively small; we have no data one way or the other to prefer 512 over this. Then I updated the assertion logic to error any time a precomputed chunk exceeds the view size. This eliminates the need to clone these chunks by ensuring our view size is always larger than the largest precomputed chunk we can possibly write. I'm generally in favor of this for a few reasons.

First, we'll always know during testing whether we've violated the limit, as long as we exercise each stream config, because the precomputed chunks are created in module scope. Second, we can always split up large chunks, so making sure each precomputed chunk is smaller than whatever view size we actually want is relatively trivial.
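
For illustration, a minimal sketch of the invariant described above, assuming a module-scope helper along the lines of `stringToPrecomputedChunk`; this is not the actual stream config code, and the error message is made up:

```ts
// Illustrative sketch only; names, VIEW_SIZE handling, and the error text are
// assumptions rather than the real implementation.
const VIEW_SIZE = 2048; // Edge now matches Node per this change
const encoder = new TextEncoder();

function stringToPrecomputedChunk(content: string): Uint8Array {
  const precomputed = encoder.encode(content);
  // New invariant: every precomputed chunk must fit inside a single view, so
  // writes always copy it into the view buffer instead of streaming it by
  // transferring its shared, module-scope backing memory.
  if (precomputed.byteLength > VIEW_SIZE) {
    throw new Error(
      'Precomputed chunk is ' + precomputed.byteLength + ' bytes, which exceeds ' +
      'the view size of ' + VIEW_SIZE + ' bytes; split it into smaller chunks.',
    );
  }
  return precomputed;
}

// Precomputed chunks are created at module scope, so merely loading the module
// in a test for a given stream config surfaces a violation immediately.
const doctypeChunk = stringToPrecomputedChunk('<!DOCTYPE html>');
```

Because the check runs when the module is first loaded, any stream config exercised in tests fails immediately instead of depending on a particular flush order at render time.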

@facebook-github-bot added the CLA Signed and React Core Team (Opened by a member of the React Core Team) labels on Mar 15, 2024
@react-sizebot commented Mar 15, 2024

Comparing: 966d174...76f7f70

Critical size changes

Includes critical production bundles, as well as any change greater than 2%:

| Name | +/- | Base | Current | +/- gzip | Base gzip | Current gzip |
| --- | --- | --- | --- | --- | --- | --- |
| oss-stable/react-dom/cjs/react-dom.production.min.js | = | 177.19 kB | 177.19 kB | = | 55.23 kB | 55.23 kB |
| oss-experimental/react-dom/cjs/react-dom.production.min.js | = | 177.72 kB | 177.72 kB | = | 55.55 kB | 55.55 kB |
| facebook-www/ReactDOM-prod.classic.js | = | 594.67 kB | 594.67 kB | = | 105.04 kB | 105.04 kB |
| facebook-www/ReactDOM-prod.modern.js | = | 577.93 kB | 577.93 kB | = | 102.10 kB | 102.10 kB |
| oss-experimental/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.js | = | 2.25 kB | 2.20 kB | = | 0.88 kB | 0.87 kB |
| oss-stable-semver/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.js | = | 2.25 kB | 2.20 kB | = | 0.88 kB | 0.87 kB |
| oss-stable/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.js | = | 2.25 kB | 2.20 kB | = | 0.88 kB | 0.87 kB |
| oss-experimental/react-noop-renderer/cjs/react-noop-renderer-flight-server.development.js | = | 2.48 kB | 2.41 kB | = | 0.94 kB | 0.92 kB |
| oss-stable-semver/react-noop-renderer/cjs/react-noop-renderer-flight-server.development.js | = | 2.48 kB | 2.41 kB | = | 0.94 kB | 0.92 kB |
| oss-stable/react-noop-renderer/cjs/react-noop-renderer-flight-server.development.js | = | 2.48 kB | 2.41 kB | = | 0.94 kB | 0.92 kB |
| oss-experimental/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.min.js | = | 1.35 kB | 1.31 kB | = | 0.63 kB | 0.62 kB |
| oss-stable-semver/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.min.js | = | 1.35 kB | 1.31 kB | = | 0.63 kB | 0.62 kB |
| oss-stable/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.min.js | = | 1.35 kB | 1.31 kB | = | 0.63 kB | 0.62 kB |
| test_utils/ReactAllWarnings.js | Deleted | 66.60 kB | 0.00 kB | Deleted | 16.28 kB | 0.00 kB |

Significant size changes

Includes any change greater than 0.2%:

| Name | +/- | Base | Current | +/- gzip | Base gzip | Current gzip |
| --- | --- | --- | --- | --- | --- | --- |
| oss-experimental/react-server-dom-webpack/umd/react-server-dom-webpack-server.browser.development.js | = | 140.54 kB | 140.06 kB | = | 31.23 kB | 31.09 kB |
| oss-experimental/react-server-dom-turbopack/umd/react-server-dom-turbopack-server.browser.development.js | = | 137.97 kB | 137.49 kB | = | 30.53 kB | 30.39 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js | = | 134.36 kB | 133.89 kB | = | 31.11 kB | 30.98 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.browser.development.js | = | 133.31 kB | 132.84 kB | = | 30.78 kB | 30.64 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js | = | 139.26 kB | 138.77 kB | = | 32.08 kB | 31.93 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.edge.development.js | = | 132.43 kB | 131.96 kB | = | 30.60 kB | 30.46 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.browser.development.js | = | 130.86 kB | 130.39 kB | = | 30.07 kB | 29.93 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.development.js | = | 136.98 kB | 136.49 kB | = | 31.45 kB | 31.29 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.unbundled.development.js | = | 136.63 kB | 136.14 kB | = | 31.23 kB | 31.08 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.unbundled.development.js | = | 134.34 kB | 133.85 kB | = | 30.64 kB | 30.48 kB |
| oss-experimental/react-server-dom-esm/cjs/react-server-dom-esm-server.node.development.js | = | 128.18 kB | 127.69 kB | = | 29.19 kB | 29.03 kB |
| oss-stable-semver/react-server-dom-webpack/umd/react-server-dom-webpack-server.browser.development.js | = | 118.15 kB | 117.67 kB | = | 26.96 kB | 26.82 kB |
| oss-stable/react-server-dom-webpack/umd/react-server-dom-webpack-server.browser.development.js | = | 118.15 kB | 117.67 kB | = | 26.96 kB | 26.82 kB |
| oss-stable-semver/react-server-dom-turbopack/umd/react-server-dom-turbopack-server.browser.development.js | = | 115.58 kB | 115.10 kB | = | 26.28 kB | 26.13 kB |
| oss-stable/react-server-dom-turbopack/umd/react-server-dom-turbopack-server.browser.development.js | = | 115.58 kB | 115.10 kB | = | 26.28 kB | 26.13 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js | = | 112.30 kB | 111.84 kB | = | 26.71 kB | 26.57 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js | = | 112.30 kB | 111.84 kB | = | 26.71 kB | 26.57 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js | = | 117.68 kB | 117.19 kB | = | 27.81 kB | 27.66 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js | = | 117.68 kB | 117.19 kB | = | 27.81 kB | 27.66 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.browser.development.js | = | 111.93 kB | 111.46 kB | = | 26.64 kB | 26.50 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.browser.development.js | = | 111.93 kB | 111.46 kB | = | 26.64 kB | 26.50 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.edge.development.js | = | 110.37 kB | 109.91 kB | = | 26.21 kB | 26.06 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.edge.development.js | = | 110.37 kB | 109.91 kB | = | 26.21 kB | 26.06 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.development.js | = | 115.40 kB | 114.90 kB | = | 27.20 kB | 27.05 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.development.js | = | 115.40 kB | 114.90 kB | = | 27.20 kB | 27.05 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.browser.development.js | = | 109.48 kB | 109.01 kB | = | 25.97 kB | 25.82 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.browser.development.js | = | 109.48 kB | 109.01 kB | = | 25.97 kB | 25.82 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.unbundled.development.js | = | 115.05 kB | 114.55 kB | = | 26.93 kB | 26.78 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.unbundled.development.js | = | 115.05 kB | 114.55 kB | = | 26.93 kB | 26.78 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.unbundled.development.js | = | 112.76 kB | 112.26 kB | = | 26.37 kB | 26.21 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.unbundled.development.js | = | 112.76 kB | 112.26 kB | = | 26.37 kB | 26.21 kB |
| oss-stable-semver/react-server-dom-esm/cjs/react-server-dom-esm-server.node.development.js | = | 106.60 kB | 106.11 kB | = | 24.99 kB | 24.83 kB |
| oss-stable/react-server-dom-esm/cjs/react-server-dom-esm-server.node.development.js | = | 106.60 kB | 106.11 kB | = | 24.99 kB | 24.83 kB |
| oss-experimental/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.js | = | 2.25 kB | 2.20 kB | = | 0.88 kB | 0.87 kB |
| oss-stable-semver/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.js | = | 2.25 kB | 2.20 kB | = | 0.88 kB | 0.87 kB |
| oss-stable/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.js | = | 2.25 kB | 2.20 kB | = | 0.88 kB | 0.87 kB |
| oss-experimental/react-noop-renderer/cjs/react-noop-renderer-flight-server.development.js | = | 2.48 kB | 2.41 kB | = | 0.94 kB | 0.92 kB |
| oss-stable-semver/react-noop-renderer/cjs/react-noop-renderer-flight-server.development.js | = | 2.48 kB | 2.41 kB | = | 0.94 kB | 0.92 kB |
| oss-stable/react-noop-renderer/cjs/react-noop-renderer-flight-server.development.js | = | 2.48 kB | 2.41 kB | = | 0.94 kB | 0.92 kB |
| oss-experimental/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.min.js | = | 1.35 kB | 1.31 kB | = | 0.63 kB | 0.62 kB |
| oss-stable-semver/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.min.js | = | 1.35 kB | 1.31 kB | = | 0.63 kB | 0.62 kB |
| oss-stable/react-noop-renderer/cjs/react-noop-renderer-flight-server.production.min.js | = | 1.35 kB | 1.31 kB | = | 0.63 kB | 0.62 kB |
| test_utils/ReactAllWarnings.js | Deleted | 66.60 kB | 0.00 kB | Deleted | 16.28 kB | 0.00 kB |

Generated by 🚫 dangerJS against 76f7f70

@sebmarkbage (Collaborator) left a comment

Well, we do know that we flush much more often than a typical streaming service would, so whatever the default is for others doesn’t necessarily make sense for us anyway.

The risk is that with many small flushes we eat up a ton of unnecessary memory.

@sebmarkbage (Collaborator) left a comment

For the really large chunks we could just split up the scripts. (Although they’re getting a bit too large already.)
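
If that ever becomes necessary, splitting could look roughly like this hypothetical sketch (names and contents are placeholders; the point is only that several small precomputed chunks written back to back produce the same output while each stays under the view size):

```ts
// Hypothetical sketch; names and contents are placeholders.
const encoder = new TextEncoder();
const stringToPrecomputedChunk = (s: string): Uint8Array => encoder.encode(s);

// Instead of one precomputed chunk that risks outgrowing the view size, e.g.
//   stringToPrecomputedChunk(openTag + inlineRuntimeSource + closeTag)
// the same output can be written as several smaller precomputed chunks:
const inlineScriptOpen = stringToPrecomputedChunk('<script>');
const inlineScriptBody = stringToPrecomputedChunk('/* inline runtime source, split further as needed */');
const inlineScriptClose = stringToPrecomputedChunk('</script>');

function writeInlineScript(write: (chunk: Uint8Array) => void) {
  // Each piece fits within the view, so each is copied rather than transferred.
  write(inlineScriptOpen);
  write(inlineScriptBody);
  write(inlineScriptClose);
}
```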

@gnoff (Collaborator, Author) commented Mar 16, 2024

Yeah, I suppose Edge has implied tighter memory constraints. Do you think the transferred buffers are likely to not be freed quickly?

But yes, I came to the same conclusion: it’s trivial for us to split up our large chunks to make them smaller, and that is less bad than accidentally failing to assert the copy, like what happened in the case I’m fixing.

@gnoff gnoff merged commit b09e102 into facebook:main Mar 16, 2024
38 checks passed
@gnoff gnoff deleted the fix-large-chunk branch March 16, 2024 19:39
github-actions bot pushed a commit that referenced this pull request Mar 16, 2024
…ender-time assertions (#28568)

DiffTrain build for [b09e102](b09e102)
gnoff added a commit to gnoff/react that referenced this pull request Mar 18, 2024
…ender-time assertions (facebook#28568)

gnoff added a commit that referenced this pull request Mar 18, 2024
… without relying on render-time assertions (#28580)

(cherrypick b09e102 #28568)

[Fizz] Prevent uncloned large precomputed chunks without relying on
render-time assertions (#28568)

gnoff added a commit to gnoff/react that referenced this pull request Mar 19, 2024
…ender-time assertions (facebook#28568)

gnoff added a commit that referenced this pull request Mar 19, 2024
…s without relying on render-time assertions #28580  (#28585)

(cherrypick b09e102 on 60a927d)

[Fizz] Prevent uncloned large precomputed chunks without relying on
render-time assertions (#28568)

gnoff added a commit to gnoff/next.js that referenced this pull request Mar 25, 2024
- facebook/react#28596
- facebook/react#28625
- facebook/react#28616
- facebook/react#28491
- facebook/react#28583
- facebook/react#28427
- facebook/react#28613
- facebook/react#28599
- facebook/react#28611
- facebook/react#28610
- facebook/react#28606
- facebook/react#28598
- facebook/react#28549
- facebook/react#28557
- facebook/react#28467
- facebook/react#28591
- facebook/react#28459
- facebook/react#28590
- facebook/react#28564
- facebook/react#28582
- facebook/react#28579
- facebook/react#28578
- facebook/react#28521
- facebook/react#28550
- facebook/react#28576
- facebook/react#28577
- facebook/react#28571
- facebook/react#28572
- facebook/react#28560
- facebook/react#28569
- facebook/react#28573
- facebook/react#28546
- facebook/react#28568
- facebook/react#28562
- facebook/react#28566
- facebook/react#28565
- facebook/react#28559
- facebook/react#28508
- facebook/react#20432
- facebook/react#28555
- facebook/react#24730
- facebook/react#28472
- facebook/react#27991
- facebook/react#28514
- facebook/react#28548
- facebook/react#28526
- facebook/react#28515
- facebook/react#28533
- facebook/react#28532
- facebook/react#28531
- facebook/react#28407
- facebook/react#28522
- facebook/react#28538
- facebook/react#28509
- facebook/react#28534
- facebook/react#28527
- facebook/react#28528
- facebook/react#28519
- facebook/react#28411
- facebook/react#28520
- facebook/react#28518
- facebook/react#28493
- facebook/react#28504
- facebook/react#28499
- facebook/react#28501
- facebook/react#28496
- facebook/react#28471
- facebook/react#28351
- facebook/react#28486
- facebook/react#28490
- facebook/react#28488
- facebook/react#28468
- facebook/react#28321
- facebook/react#28477
- facebook/react#28479
- facebook/react#28480
- facebook/react#28478
- facebook/react#28464
- facebook/react#28475
- facebook/react#28456
- facebook/react#28319
- facebook/react#28345
- facebook/react#28337
- facebook/react#28335
- facebook/react#28466
- facebook/react#28462
- facebook/react#28322
- facebook/react#28444
- facebook/react#28448
- facebook/react#28449
- facebook/react#28446
- facebook/react#28447
- facebook/react#24580
gnoff added a commit to gnoff/next.js that referenced this pull request Mar 25, 2024
EdisonVan pushed a commit to EdisonVan/react that referenced this pull request Apr 15, 2024
…ender-time assertions (facebook#28568)

bigfootjon pushed a commit that referenced this pull request Apr 18, 2024
…ender-time assertions (#28568)

DiffTrain build for commit b09e102.
eps1lon added a commit to vercel/next.js that referenced this pull request Apr 19, 2024
### React upstream changes

- facebook/react#28643
- facebook/react#28628
- facebook/react#28361
- facebook/react#28513
- facebook/react#28299
- facebook/react#28617
- facebook/react#28618
- facebook/react#28621
- facebook/react#28614
- facebook/react#28596
- facebook/react#28625
- facebook/react#28616
- facebook/react#28491
- facebook/react#28583
- facebook/react#28427
- facebook/react#28613
- facebook/react#28599
- facebook/react#28611
- facebook/react#28610
- facebook/react#28606
- facebook/react#28598
- facebook/react#28549
- facebook/react#28557
- facebook/react#28467
- facebook/react#28591
- facebook/react#28459
- facebook/react#28590
- facebook/react#28564
- facebook/react#28582
- facebook/react#28579
- facebook/react#28578
- facebook/react#28521
- facebook/react#28550
- facebook/react#28576
- facebook/react#28577
- facebook/react#28571
- facebook/react#28572
- facebook/react#28560
- facebook/react#28569
- facebook/react#28573
- facebook/react#28546
- facebook/react#28568
- facebook/react#28562
- facebook/react#28566
- facebook/react#28565
- facebook/react#28559
- facebook/react#28508
- facebook/react#20432
- facebook/react#28555
- facebook/react#24730
- facebook/react#28472
- facebook/react#27991
- facebook/react#28514
- facebook/react#28548
- facebook/react#28526
- facebook/react#28515
- facebook/react#28533
- facebook/react#28532
- facebook/react#28531
- facebook/react#28407
- facebook/react#28522
- facebook/react#28538
- facebook/react#28509
- facebook/react#28534
- facebook/react#28527
- facebook/react#28528
- facebook/react#28519
- facebook/react#28411
- facebook/react#28520
- facebook/react#28518
- facebook/react#28493
- facebook/react#28504
- facebook/react#28499
- facebook/react#28501
- facebook/react#28496
- facebook/react#28471
- facebook/react#28351
- facebook/react#28486
- facebook/react#28490
- facebook/react#28488
- facebook/react#28468
- facebook/react#28321
- facebook/react#28477
- facebook/react#28479
- facebook/react#28480
- facebook/react#28478
- facebook/react#28464
- facebook/react#28475
- facebook/react#28456
- facebook/react#28319
- facebook/react#28345
- facebook/react#28337
- facebook/react#28335
- facebook/react#28466
- facebook/react#28462
- facebook/react#28322
- facebook/react#28444
- facebook/react#28448
- facebook/react#28449
- facebook/react#28446
- facebook/react#28447
- facebook/react#24580