Improve "The script will never generate a response." debuggability #210
In ardatan/graphql-mesh#4201, the issue is even harder to find, because the library internally uses a global variable.
Eventually we want to solve this kind of problem by having every event run in a separate global scope, but that will require some deep changes to V8 to do in a way that performs well, and we don't know exactly how soon we'll be able to build it. Before we do that, we'll introduce an alternative API that people can use for explicit cross-event caching when desired.

In the shorter term, this is pretty hard for us to solve, because we don't really have enough visibility into the V8 microtask loop to know that the request has decided to wait on a promise that originates from another concurrent request. All we know is that the request has no further I/O or timers scheduled yet hasn't returned a result, so it appears permanently hung. Maybe when we implement AsyncLocalContext-like support, we could leverage that? Not sure.
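To make the failure mode concrete, here is a minimal sketch of the cross-event promise caching being described. This is plain JavaScript, not tied to any runtime API, and `loadUserData` is a hypothetical stand-in for real I/O:

```javascript
// A promise created during one event is cached at global scope, and a
// later, concurrent event awaits it. In Workers, the second event's
// dependency on the first event's I/O is invisible to the runtime, so
// the second request looks permanently hung. (Outside Workers this
// runs fine, which is what makes the behavior surprising.)
let cachedUserData; // module/global scope, shared across events

async function loadUserData() {
  // Stand-in for real I/O (a subrequest, KV read, DB query, ...)
  return { name: "alice" };
}

function getUserData() {
  if (cachedUserData === undefined) {
    cachedUserData = loadUserData(); // caches the promise itself
  }
  return cachedUserData; // later events await a promise they don't own
}
```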
Is there any progress on this? I'm getting this error consistently, starting with this commit.
For reference, I think the following errors fall under the same category (for one of them there's at least a stack trace, making it possible to debug).
I'm here just to add that I'm struggling with this error as well. I do try to cache async data between events, as I expected this to be entirely possible. From what I understand, the worker code tracks, for every promise, which event the promise belongs to. Resolving promises from one request in another request is not supported. This should be mentioned in the documentation, IMHO.
We're having this problem too. I have an endpoint which outputs data from my DB (via Hyperdrive). But if I call the same endpoint very soon after the first call, I get this error. The error message is plainly false, since it says the script "will never generate a response" even though it did the first time, just not the second.
@PANstudio in the main "fetch" event listener, have a look at FetchEvent.waitUntil. Make sure all side effects that need to run to completion are registered via waitUntil; otherwise all pending promises will be (silently) aborted.
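A minimal sketch of that advice, written as a plain handler function so it runs outside the Workers runtime. `recordMetrics` is a hypothetical background task; in a real Worker, `ctx` would be the third argument of a module worker's fetch handler (or the FetchEvent itself in service-worker syntax):

```javascript
// Register side effects with waitUntil so the runtime keeps the event
// alive until they settle; unregistered promises are silently dropped
// once the response has been returned.
async function recordMetrics(url) {
  // Stand-in for analytics, cache writes, etc.
  return `logged ${url}`;
}

async function handleFetch(request, ctx) {
  ctx.waitUntil(recordMetrics(request.url));
  return { status: 200, body: "ok" };
}
```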
Anyone know how to solve this issue when using WebSockets? Here is an example: https://github.com/grgbkr/cf-websocket-msg-limit
Per @kentonv's earlier response, changing this behavior is difficult because we do not currently have visibility into the microtask queue to know which promise is actually being waited on to generate the response, not to mention whether there is anything still around that can resolve that promise. We do have a mechanism in place now to allow waiting on promises across requests, but it's still entirely possible to store a promise at global scope that will never get resolved.

Also consider that this really isn't tied to the cross-request waiting issue. Consider the following example:

```js
export default {
  fetch(req) {
    const { promise, resolve } = Promise.withResolvers();
    return promise;
  }
}
```

In this case we should also get the "The script will never generate a response" error: when the call to `fetch` returns, nothing remains that could resolve the promise it handed back.

I guess my key question back to those of you that are dealing with debugging this issue is: what information could we provide that would make this easier to debug? Is it knowing which promises are being waited on to actually generate the response? If that's the case, it's not trivial, because a single request can potentially generate thousands of promises, any one of which could end up being awaited... and by the time it is awaited we've lost the context of where the promise was created or what created it. V8 provides a "promise hooks" API that would allow us to track all promises created, but it comes with a fairly steep performance penalty that means we couldn't keep it always on.
@jasnell what if we had a special mode where we collect a stack trace for every instantiated promise and store it in a promise property? We could theoretically follow the promise chain back from C++ to the promise object and dump the stack. Of course performance would be horrible, but it's better than nothing? WDYT?
@PoisnFang Your specific case is a bug which I believe will be improved by #804. However, note that the bug is only that the error message is wrong -- the error message should be saying that the message was too large. We do in fact have a 1MB limit on WebSocket messages received by a Worker.
@kentonv Sorry, no, that's not what my use case is. (Mine has nothing to do with the size of the message.)
@mikea Definitely possible using the V8 promise hooks API, but absolutely only with a significant perf cost. If we went with that mode, I think we'd absolutely need to strictly limit it to explicit opt-in.
@PoisnFang The example you linked to fails with this error specifically when it receives a message >1MB. If you are always seeing the error regardless of message size, please post your code that demonstrates this.
@jasnell Is there anything we need to do to "opt in" to this functionality? As mentioned above by several others, we try to do cross-request caching of expensive async calls, but we have been seeing odd spikes in this error.
There's nothing you need to opt into, but it still won't completely solve every issue here. For example, take the following contrived example:

```js
export default {
  async fetch() {
    if (globalThis.promise === undefined) {
      const { promise, resolve } = Promise.withResolvers();
      globalThis.promise = promise;
      // resolve is discarded here, so nothing can ever settle the promise
    } else {
      await globalThis.promise;
    }
    return new Response("hello");
  }
}
```

You will end up with an error on the second request because nothing remains that is going to actually resolve the promise being waited on. Let me be clear: we do not recommend that you wait on promises across requests, and in the future we might make it impossible to share promises through global state like this. But if you do, you need to make sure that something is still around to drive those promises to completion. Without seeing your code I cannot say for certain what the issue may be, but you'll want to double-check that all of your promises are being resolved.
Ok, just a status update... I did some digging, and improving this is definitely going to be a challenge. Take the following example:

```js
export default {
  async fetch() {
    const { promise } = Promise.withResolvers();
    await promise;
    return new Response("ok");
  }
};
```

In this example, the promise that is never resolved, leading to the "The script will never generate a response" error/warning, is obviously the one created by `Promise.withResolvers()`. We could implement a mechanism with the appropriate bookkeeping that might provide the information we need, but it would depend on the Oh So Slow V8 promise hooks API and wouldn't necessarily be guaranteed to work in every case... simply because what we really need to know to make this more debuggable is where the promise is supposed to be resolved or rejected, not where it was created or waited on. The closest the promise hooks API would let us determine is the stack where a promise was created and the stack where it was waited on... and even then we don't have the ability to selectively capture that information only for specific promises, so we'd have to capture it for all of them... which is sllloooowwww.

Anyway, I haven't given up. I still want to find some way of improving this, but so far it's proving to be quite difficult.
@jasnell FWIW, in any other runtime your example would simply hang, which is also difficult to debug. But I guess V8 hasn't seen fit to develop a way to figure out which promise is causing the hang. The Workers runtime is actually being a little nicer than other runtimes here by at least failing early, rather than making the user sit and wait for a while before concluding that the request is hung...
I've spent all day trying to diagnose what the actual problem is and getting this message: "The script will never generate a response." I still have no idea where to even start. Judging from the thread above, there's simply no way to get any useful information about what is wrong or how to fix it.
In my experience that's not the case. My app works great on Vercel and locally; however, the exact same codebase gives me this error intermittently in Workers on some pages.
Anecdotally, this error was happening to me, apparently caused by using a request body with DELETE requests.
@L422Y Thank you! I got very deep into this issue, using the same setup you are (Nuxt/Nitro on Cloudflare, DELETE request). Switching to route parameters and using the query string instead of the body fixed it for me as well. Thanks for the heads up, I was going insane 💚
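A framework-neutral sketch of that workaround (names are illustrative; `URL`/`URLSearchParams` are standard in both Workers and Node): move the identifier out of the DELETE body and into the query string.

```javascript
// Server side: read the id from the URL instead of the request body.
function idFromRequestUrl(requestUrl) {
  const url = new URL(requestUrl);
  return url.searchParams.get("id");
}

// Client side (illustrative): send the DELETE with no body at all.
//   fetch(`/api/items?id=${encodeURIComponent(id)}`, { method: "DELETE" })
```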
I'm getting the same error as you, but in my case I'm using mongoose.
The "The script will never generate a response." error is generally a very hard error to debug, the only source on the internet being this blog post. We recently had this issue thrown on rare occasions by code that created its client, and thus its promises, at global scope. When heavily requested, the worker sometimes aborted with "The script will never generate a response."

Luckily, this was the only piece of code that could possibly store promises in a global context (to solve it, we called `createClient` on a per-request basis), but I can imagine this would be very hard to debug in a more complicated worker that isn't so careful about saving promises in a global context. Is there any way to make this error more debuggable?
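The per-request fix described above can be sketched like this (`createClient` and `query` are hypothetical stand-ins for the real client factory): construct the client inside the handler so every promise it produces belongs to the event that awaits it.

```javascript
// Per-request pattern: nothing async is cached at module scope, so no
// promise can outlive (or be awaited by) another event.
async function createClient() {
  // Hypothetical factory standing in for a real DB/API client.
  return {
    query: async (sql) => `ran: ${sql}`,
  };
}

async function handleRequest() {
  const client = await createClient(); // per-request, never global
  return client.query("SELECT 1");
}
```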