x/net/http2: slow streams can potentially block other faster streams #54330
Change https://go.dev/cl/421975 mentions this issue.
HTTP/2 has both per-connection and per-stream flow control. The behavior you're seeing occurs because the slow stream has consumed not just its own flow-control tokens, but the connection's tokens as well. To ensure that a single stream can't consume all of the connection tokens, you can configure the per-stream limit to something lower than the per-connection limit. The relevant knobs are Server.MaxUploadBufferPerConnection and Server.MaxUploadBufferPerStream in golang.org/x/net/http2. Here's your example modified to set a higher per-connection limit:
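(The modified example itself isn't preserved in this extraction. The following is a minimal sketch, assuming the h2c echo server described in the report, of setting the per-connection window larger than the per-stream window; the addresses and window sizes are illustrative.)

```go
package main

import (
	"io"
	"net/http"
	"time"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("request-type") != "priority" {
			time.Sleep(5 * time.Second) // simulate a slow stream
		}
		io.Copy(w, r.Body) // echo the request body
	})

	h2s := &http2.Server{
		// Per-connection receive window: large enough for several streams.
		MaxUploadBufferPerConnection: 16 << 20, // 16 MiB (illustrative)
		// Per-stream receive window: a single slow stream can hold at most
		// this much of the connection's budget.
		MaxUploadBufferPerStream: 1 << 20, // 1 MiB (illustrative)
	}

	srv := &http.Server{
		Addr:    ":8080",
		Handler: h2c.NewHandler(handler, h2s),
	}
	srv.ListenAndServe()
}
```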
Thanks for the reply! That's correct: the connection tokens got exhausted, which stalled other streams on the same connection. While the approach you suggested definitely works, the connection limit is still easy to exhaust when the server handles a high number of concurrent requests. I have an L7 proxy that forwards incoming requests to different destinations. When one destination A is unhealthy or slow, the requests bound for A back up in the client connection pipeline and consume the connection flow-control tokens. Since those tokens are limited, other requests bound for different destinations start starving for them. The example you shared can be updated to run many concurrent streams to reproduce the issue again. Surprisingly, we didn't face this issue with the gRPC-go server, and after digging in I found that the gRPC-go HTTP/2 server returns connection flow-control tokens immediately on receiving a DATA frame (source).
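(Illustration only, not from the issue: a rough sketch of the accounting described above, written against the x/net/http2 Framer API. The serveDataFrames and deliver names are hypothetical; the point is that the connection-level WINDOW_UPDATE on stream 0 is sent as soon as a DATA frame arrives, while stream-level tokens can still be withheld until the handler reads.)

```go
// Hypothetical receive loop sketching the "return connection tokens
// immediately" behavior described above; not the actual x/net/http2 or
// gRPC-go implementation.
package sketch

import "golang.org/x/net/http2"

func serveDataFrames(fr *http2.Framer, deliver func(streamID uint32, data []byte)) error {
	for {
		f, err := fr.ReadFrame()
		if err != nil {
			return err
		}
		df, ok := f.(*http2.DataFrame)
		if !ok {
			continue // headers, settings, etc. omitted in this sketch
		}
		if n := uint32(len(df.Data())); n > 0 {
			// Give the connection-level tokens back right away so one slow
			// stream cannot starve the shared connection window.
			if err := fr.WriteWindowUpdate(0, n); err != nil {
				return err
			}
		}
		// Stream-level tokens are returned only after the application
		// consumes the data, preserving per-stream backpressure.
		deliver(df.Header().StreamID, df.Data())
	}
}
```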
You're describing having only per-stream flow control, with no per-connection flow control. If that's what you want, then you can set the per-connection limit high enough that it never becomes the bottleneck. I don't understand what purpose connection-level flow control is supposed to serve if tokens are returned immediately without any pushback mechanism. Perhaps I'm missing something.
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
Yes

What operating system and processor architecture are you using (go env)?

What did you do?
I'm using a Go HTTP/2 (h2c) server as an echo server: the handler writes the request body back in the response immediately for requests with the priority header request-type=priority, and otherwise delays the response by 5s. I use the HTTP/2 client to dispatch two concurrent requests with a 2 MB payload each, one of which has the priority header set. A minimal sketch of this setup is shown below.
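(The original repro is linked under "Repro" below; this is only a sketch of the setup described above. The port, wait times, and payload size are illustrative.)

```go
// Sketch: h2c echo server plus a client sending two concurrent 2 MiB
// requests, one marked priority.
package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"sync"
	"time"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	// Echo handler: priority requests are echoed immediately; everything
	// else sleeps 5s before reading the body, holding on to the
	// flow-control tokens the client has already spent.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("request-type") != "priority" {
			time.Sleep(5 * time.Second)
		}
		io.Copy(w, r.Body)
	})
	go func() {
		srv := &http.Server{
			Addr:    "127.0.0.1:8080",
			Handler: h2c.NewHandler(handler, &http2.Server{}),
		}
		srv.ListenAndServe()
	}()
	time.Sleep(200 * time.Millisecond) // crude wait for the listener

	// h2c client: speak HTTP/2 over plain TCP.
	client := &http.Client{Transport: &http2.Transport{
		AllowHTTP: true,
		DialTLS: func(network, addr string, _ *tls.Config) (net.Conn, error) {
			return net.Dial(network, addr)
		},
	}}

	payload := bytes.Repeat([]byte("x"), 2<<20) // 2 MiB body
	var wg sync.WaitGroup
	for _, priority := range []bool{false, true} {
		wg.Add(1)
		go func(priority bool) {
			defer wg.Done()
			req, _ := http.NewRequest("POST", "http://127.0.0.1:8080", bytes.NewReader(payload))
			if priority {
				req.Header.Set("request-type", "priority")
			}
			start := time.Now()
			resp, err := client.Do(req)
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			fmt.Printf("priority=%v took %v\n", priority, time.Since(start))
		}(priority)
	}
	wg.Wait()
}
```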
What did you expect to see?
The priority response must arrive immediately; the non-priority response must arrive after a 5s delay.
According to the RFC, one stream must not block another stream on the same connection.
What did you see instead?
Both the non-priority and priority responses arrived after 5s. This should not have happened, as HTTP/2 streams on the same connection must not interfere with each other.
Repro
Root Cause
The Go HTTP/2 server returns connection flow-control tokens to the client only after the application handler has read the corresponding body bytes. When a few slow or inactive streams consume the entire connection window, the client is blocked from writing DATA frames for other streams on the same connection due to the lack of flow-control window.
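(For illustration only, not part of the original report: because tokens come back only as the handler reads, a proxy-style handler that drains the request body before calling a potentially slow destination releases its share of the connection window early. forwardToDest is a hypothetical stand-in for the proxy's upstream call, and buffering the whole body trades memory for flow-control headroom.)

```go
// Hypothetical proxy handler illustrating the root cause above: reading the
// body is what returns flow-control tokens to the client, so draining it
// before contacting a slow destination frees the shared connection window.
package sketch

import (
	"io"
	"net/http"
)

func proxyHandler(forwardToDest func(r *http.Request, body []byte) ([]byte, error)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Reading the full body returns this stream's tokens to the client,
		// even if the destination turns out to be slow or unhealthy.
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "failed to read request body", http.StatusBadRequest)
			return
		}
		resp, err := forwardToDest(r, body) // may block on a slow destination
		if err != nil {
			http.Error(w, "upstream error", http.StatusBadGateway)
			return
		}
		w.Write(resp)
	}
}
```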