Custom timeout per request #165
I'm not quite sure how that would work. The current timeout is on the socket, so you could never have a per-request timeout that is larger than the socket timeout. Allowing the socket timeout to change per request makes things complicated, especially with pipelining. I guess we could add it per request as long as it's less than the socket timeout, but it would incur a performance penalty when used. Also, does it apply to the request body (i.e. read) or just socket activity (i.e. write/read)? What about when the request is delayed by other requests ahead in the pipeline?
Because it's complicated with pipelining 😄
I think we are talking about different things. The timeout on the socket is an activity timeout, e.g. the timeout between two chunks of data. I think @delvedor is asking about a request timeout.
It can be bigger than the socket timeout, as a single request is composed of multiple chunks. Essentially it's a matter of creating a timer per request. I agree this will add overhead when used. However, this is a common feature which would have similar overhead whether implemented here or externally, so as long as it does not cost us overhead when disabled, I would be OK with adding it.
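The distinction being drawn here can be sketched generically (the names below are illustrative, not part of undici): an inactivity timer resets on every chunk, so a request made of many chunks can legitimately take longer than the inactivity window, while a request timeout is one fixed deadline for the whole exchange.

```js
// Socket-style inactivity timeout: the timer resets on every chunk of
// activity, so it only fires after `ms` of *silence*.
function inactivityTimer (ms, onTimeout) {
  let timer = setTimeout(onTimeout, ms)
  return {
    activity () {
      // Call on every chunk of data; resets the inactivity window.
      clearTimeout(timer)
      timer = setTimeout(onTimeout, ms)
    },
    clear () {
      clearTimeout(timer)
    }
  }
}

// Request timeout: a single fixed deadline for the whole request, which can
// therefore meaningfully be *larger* than the inactivity window above.
function requestDeadline (ms, onTimeout) {
  const timer = setTimeout(onTimeout, ms)
  return {
    clear () {
      clearTimeout(timer)
    }
  }
}
```

A request delivering a chunk every 80 ms never trips a 100 ms inactivity timer, even if the full transfer takes seconds; a 100 ms request deadline would fire regardless.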
Ah, so it's a timeout for the entire request regardless of activity. @delvedor, is this what you mean? Also please note that this is different from the existing socket timeout.
Correct :)
That's contradictory with how the existing `timeout` option behaves. There are two things here:
Essentially I would like to achieve something like this:

```js
'use strict'

const http = require('http')

const server = http.createServer(handler)

server.listen(0, () => {
  const request = http.request(
    `http://localhost:${server.address().port}`,
    { timeout: 500 }
  )

  request.on('response', response => {
    console.log(response.statusCode)
  })

  request.on('timeout', () => {
    request.abort()
    console.log('timeout')
  })

  request.on('abort', () => {
    console.log('abort')
  })

  request.on('error', err => {
    console.log(err)
  })

  request.end()
})

function handler (req, res) {
  setTimeout(() => {
    res.end('ok')
  }, 1000)
}
```

The log is:
The error is caused by the call to `request.abort()`.
That's not enough to determine how we implement this. Notice that undici is fundamentally different due to pipelining, so that example cannot be directly applied. So how would you expect that timeout to work? E.g. you might have requests being served ahead in the pipeline. Should the waiting time for other responses ahead in the pipeline count towards the timeout or not?

```js
client.request({ ...opts }, (err) => {
  // This is a large request downloading lots of data.
})
client.request({ ...opts, timeout: 100 }, (err) => {
  // Should this time out if there is no activity on this specific request,
  // since it's waiting for the previous large request?
  // Or if there is no activity on the socket?
  // Do we start the timeout from the moment this request is at the head of
  // the pipeline and then measure the socket activity?
})
```
That's a good question. As an external user, I don't really care about why the timeout has been reached. I would expect that if I set a timeout of 1000 milliseconds, once the timeout has passed, I get an error back, even if the request has never been sent.
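That user-facing expectation, a hard deadline regardless of cause, can be sketched generically (this is not undici API; `withDeadline` is a hypothetical helper):

```js
// Hypothetical helper: reject after `ms` milliseconds no matter what the
// underlying operation is doing -- whether it was never sent, is queued
// behind other requests in the pipeline, or is mid-transfer.
function withDeadline (promise, ms) {
  let timer
  const deadline = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms)
  })
  // Whichever settles first wins; always clean up the timer afterwards.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer))
}
```

The caller gets a single guarantee: a result or an error within `ms` milliseconds.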
So your expectation would be that if the request has not completed within the timeout, counted from when the request was created, it should time out. So essentially it's the same as:

```js
const { finished } = require('stream')

const timeout = setTimeout(() => {
  throw new Error('timeout')
}, timeoutMs) // timeoutMs: the per-request timeout

const { body } = await client.request(...opts)

finished(body, () => {
  clearTimeout(timeout)
})
```

Note that this is different from what the socket timeout does.
I think it is what a user without knowledge of the internals would expect.
I think that fits @mcollina's suggestion and should be rather simple to implement. It just invokes the error callback once the timer fires. My only hesitation here is that it works differently than the socket timeout.
Oooh, now I've understood what you mean. We can't use the same approach here, because if we are using pipelining > 1, then there might be more than one request per socket, making the timeout unreliable. What do you think about optimizing this case?
Ok! So if we would like to compare Node.js core `http.request` and undici, we can summarize it like this:

Did I get this right?
I don't really think they can be compared in a meaningful way. It's tricky. 😕
Suggestion on what we can do:
Which is kind of a hybrid, but it makes sense to me at least.
I would simply do that, so we can offer a strong guarantee to our users.
This has been added.
At the moment we can configure a global timeout, but not a custom timeout per request, which would be quite handy.
Is there a reason for this?
```js
const { Client } = require('undici')

const client = new Client(`http://localhost:3000`)

client.request({
  path: '/',
  method: 'GET',
+ timeout: 10000
}, function (err, data) {
  // handle response
})
```
Related: #160