sentry.client.config.ts

```typescript
// This file configures the initialization of Sentry on the client.
// The config you add here will be used whenever a user loads a page in their browser.
// https://docs.sentry.io/platforms/javascript/guides/nextjs/

import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: '__DSN',

  // Add optional integrations for additional features
  integrations: [Sentry.replayIntegration()],

  // Define how likely traces are sampled. Adjust this value in production, or use tracesSampler for greater control.
  tracesSampleRate: 1,

  // Define how likely Replay events are sampled.
  // This sets the sample rate to be 10%. You may want this to be 100% while
  // in development and sample at a lower rate in production.
  replaysSessionSampleRate: 0.1,

  // Define how likely Replay events are sampled when an error occurs.
  replaysOnErrorSampleRate: 1.0,

  // Setting this option to true will print useful information to the console while you're setting up Sentry.
  debug: false,
});
```
sentry.edge.config.ts
```typescript
// This file configures the initialization of Sentry for edge features (middleware, edge routes, and so on).
// The config you add here will be used whenever one of the edge features is loaded.
// Note that this config is unrelated to the Vercel Edge Runtime and is also required when running locally.
// https://docs.sentry.io/platforms/javascript/guides/nextjs/

import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: '__DSN',

  // Define how likely traces are sampled. Adjust this value in production, or use tracesSampler for greater control.
  tracesSampleRate: 1,

  // Setting this option to true will print useful information to the console while you're setting up Sentry.
  debug: false,
});
```
sentry.server.config.ts
```typescript
// This file configures the initialization of Sentry on the server.
// The config you add here will be used whenever the server handles a request.
// https://docs.sentry.io/platforms/javascript/guides/nextjs/

import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: '__DSN',

  // Define how likely traces are sampled. Adjust this value in production, or use tracesSampler for greater control.
  tracesSampleRate: 1,

  // Setting this option to true will print useful information to the console while you're setting up Sentry.
  debug: false,
});
```
Steps to Reproduce
Hi Sentry Team,
Thank you for adding the trackFetchStreamPerformance option in #13951 and setting it to false by default to resolve #13950. However, I would still like to enable this feature in my project, so I’ve submitted PR #13967 to address the issue.
Problem Summary
In my understanding, there are several scenarios where a fetch connection ends:
1. The server finishes responding, closing the connection.
2. An error occurs during the connection, closing it.
3. The client ends the connection via `AbortController` and `AbortSignal`.
4. All `Response` `ReadableStream`s are canceled, prompting the browser to close the connection.
While cases 1-3 can be handled automatically by underlying mechanisms (including cloned Response streams), scenario 4 requires manual handling. As noted in #13950, Sentry’s fetch implementation cannot detect when streams are canceled via cancel(), particularly in libraries like flv.js, leading to long-lived connections that don't close.
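Scenario 4 can be illustrated with plain `fetch` semantics. The sketch below is hypothetical (the function name and comments are mine, not part of any SDK); it shows that `cancel()` on the consumer's reader is the only signal the browser gets that the connection can be torn down:

```typescript
// Hypothetical illustration of scenario 4: the consumer cancels the stream,
// which is what tells the browser the underlying connection can be closed.
async function watchBriefly(res: Response): Promise<void> {
  const reader = res.body!.getReader();

  // Read a couple of chunks of the live stream...
  await reader.read();
  await reader.read();

  // ...then stop. If an instrumented clone of this body is still being
  // read elsewhere, the browser keeps the connection open regardless.
  await reader.cancel('viewer stopped playback');
}
```

This is essentially what an flv.js-style player does when playback stops: it cancels its reader, but a clone of the body held by instrumentation can keep the socket alive.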
Proposed Solution
To resolve this, I overrode the cancel method in the Response body to ensure that connections are properly terminated when the original response is canceled. Below is a modified implementation of resolveResponse:
```typescript
async function resolveReader(
  reader: ReadableStreamDefaultReader,
  onFinishedResolving: () => void,
): Promise<void> {
  let running = true;
  while (running) {
    try {
      const { done } = await reader.read();
      running = !done;
      if (done) {
        onFinishedResolving();
      }
    } catch (_) {
      running = false;
    }
  }
}

export function resolveResponse(res: Response, parentRes: Response, onFinishedResolving: () => void): void {
  if (!res.body || !parentRes.body) {
    if (res.body) {
      res.body.cancel().catch(_ => {
        // noop on error
      });
    }
    return;
  }

  const body = res.body;
  const parentBody = parentRes.body;
  const responseReader = body.getReader();

  // Wrap the parent body's cancel so cancelling the original response
  // also cancels the cloned reader used to track the stream.
  const originalCancel = parentBody.cancel.bind(parentBody) as (reason?: any) => Promise<any>;
  parentBody.cancel = async (reason?: any) => {
    responseReader.cancel('Cancelled by parent stream').catch(_ => {});
    await originalCancel(reason);
  };

  // Wrap getReader so readers obtained from the parent body propagate cancel as well.
  const originalGetReader = parentRes.body.getReader.bind(parentBody) as (
    options?: ReadableStreamGetReaderOptions,
  ) => ReadableStreamDefaultReader;
  parentBody.getReader = ((opts?: any) => {
    const reader = originalGetReader(opts) as ReadableStreamDefaultReader;
    const originalReaderCancel = reader.cancel.bind(reader) as (reason?: any) => Promise<any>;
    reader.cancel = async (reason?: any) => {
      responseReader.cancel('Cancelled by parent reader').catch(_ => {});
      await originalReaderCancel(reason);
    };
    return reader;
  }) as any;

  resolveReader(responseReader, onFinishedResolving).finally(() => {
    try {
      responseReader.releaseLock();
      body.cancel().catch(() => {});
    } catch (_) {
      // noop
    }
  });
}
```
Replay Integration Issue
While implementing this fix, I also noticed that enabling the replayIntegration option causes the same issue where connections cannot be closed. I traced the problem to the _tryGetResponseText function in packages/replay-internal/src/coreHandlers/util/fetchUtils.ts. This function uses response.text() to fetch response data with a 500ms timeout, but this call fails to detect the use of cancel().
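The pattern in question looks roughly like this (a simplified sketch with a hypothetical name, not the exact `fetchUtils.ts` source): `response.text()` is raced against a 500ms timer, but when the timer wins, nothing cancels the underlying body stream, so a pending read on the cloned body keeps the connection alive.

```typescript
// Simplified sketch of the problematic pattern (not the actual
// fetchUtils.ts implementation): when the timeout wins the race,
// the still-pending text() call keeps reading the cloned body.
function tryGetResponseTextWithTimeout(response: Response): Promise<string | undefined> {
  const timeout = new Promise<undefined>(resolve => setTimeout(() => resolve(undefined), 500));
  // If `timeout` resolves first, response.text() is never cancelled.
  return Promise.race([response.text(), timeout]);
}
```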
I also referenced the `response.text()` implementation in Node.js (unfortunately, I was only able to find the Node.js implementation, but I believe the behavior should be similar in the browser environment). In Node.js, `response.text()` ultimately calls `readAllBytes` to retrieve the data.
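The original snippet is not reproduced here, but a read-to-completion loop of that kind looks roughly like the following hypothetical simplification (not the actual undici source):

```typescript
// Hypothetical simplification of a read-everything loop as used by
// response.text() internally (not the actual undici source). The loop
// only exits when the stream itself signals `done`.
async function readAllBytes(reader: ReadableStreamDefaultReader<Uint8Array>): Promise<Uint8Array> {
  const chunks: Uint8Array[] = [];
  let total = 0;
  while (true) {
    const { value, done } = await reader.read();
    if (done) break; // a never-ending stream means this never happens
    chunks.push(value!);
    total += value!.length;
  }
  const out = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.length;
  }
  return out;
}
```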
This implementation does not account for the 500ms timeout that we set, resulting in readAllBytes never exiting. Therefore, I modified the _tryGetResponseText implementation to manage the Response stream directly, as follows:
```typescript
async function _tryGetResponseText(response: Response): Promise<string | undefined> {
  if (!response.body) {
    throw new Error('Response has no body');
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let result = '';
  let running = true;

  // Cancel the reader if the body has not finished within 500ms.
  const timeout = setTimeout(() => {
    if (running) {
      reader.cancel('Timeout while trying to read response body').catch(_ => {});
    }
  }, 500);

  try {
    while (running) {
      const { value, done } = await reader.read();
      running = !done;
      if (value) {
        result += decoder.decode(value, { stream: !done });
      }
    }
  } finally {
    clearTimeout(timeout);
    reader.releaseLock();
  }

  return result;
}
```
Expected Result
The application can correctly close connections for long-lived streams.
Actual Result
The connection does not exit as expected.
Conclusion
With the modifications above, my application no longer experiences the issue of long-lived connections that cannot be closed. I would appreciate it if you could review this solution and provide any feedback for further improvement. Thank you!
Hi @Lei-k, thanks for this detailed description and for outlining the issue with replay. We'll check your PR, see if all the tests in our CI pass, and then get back to you - let's continue this conversation in the PR 👍
This issue has gone three weeks without activity. In another week, I will close it.
But! If you comment or otherwise update it, I will reset the clock, and if you remove the label Waiting for: Community, I will leave it alone ... forever!
"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
Is there an existing issue for this?
How do you use Sentry?
Self-hosted/on-premise
Which SDK are you using?
@sentry/nextjs
SDK Version
8.33.1 through `develop` (built from source)
Framework Version
Next.js 14.2.13
Link to Sentry event
No response
Reproduction Example/SDK Setup
sentry.client.config.ts
sentry.edge.config.ts
sentry.server.config.ts