Enforce Active Connection limits #6754
Changes from all commits:
- 8a1d923
- 19ab770
- 326d3a6
- f3c282d
- 4a5b06a
```diff
@@ -926,7 +926,7 @@ bool
 PluginVC::add_to_active_queue()
 {
   // do nothing
-  return false;
+  return true;
 }
 
 SOCKET
```
```diff
@@ -1153,6 +1153,13 @@ Http2ConnectionState::state_closed(int event, void *edata)
 Http2Stream *
 Http2ConnectionState::create_stream(Http2StreamId new_id, Http2Error &error)
 {
+  // first check if we've hit the active connection limit
+  if (!ua_session->get_netvc()->add_to_active_queue()) {
```
Review comments on the add_to_active_queue() check:

Member:
Originally I thought this would be a problem because, with the multiplexing in HTTP/2, the session may already be on the active queue, but looking at add_to_active_queue, it already deals with that case.

Member:
Of course, in the case of HTTP/2 it may be better if add_to_active_queue does not call do_io_close and instead lets the error handling in the HTTP/2 logic send a useful error and close the connection.

Member:
And looking again, that is exactly what the logic does, so this looks good.

Contributor (Author):
Yeah, I tweaked [...]. One thing I'm wondering about is whether the throttling should be done with the HTTP/2 error [...].

Member:
If we don't want retries, I'd use [...].

Contributor (Author):
Yeah, that's exactly my thought as well. If one or a few boxes in a cluster are overloaded, a retry from the client is desirable (as it may then end up on a new box). On the other hand, if the entire cluster is overloaded, we don't want to create a retry storm and make matters worse by allowing a fast retry. I'll change it to [...].

Member:
We cannot use [...] for this case. These two are possible options, IMO: [...]. Reasons I suggested [...]. Another option is returning 503 (and then [...]).

Contributor (Author):
That's not correct. Most clients will retry as soon as they see a 503, including automatically at the network layer. In fact, 502/503 are considered the safest status codes for clients to retry automatically. I'll just change it back to return (connection class + no error). I don't think this is a big deal, and the only way to prevent a retry would be if there were a custom protocol between the server and client.

Member:
It's a bummer if clients ignore the Retry-After header sent with a 503.
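To make the add_to_active_queue() discussion above concrete, here is a minimal sketch of the contract the reviewers seem to be assuming: the call reports whether the active-queue limit allowed the connection, tolerates being called for a session that is already queued (the HTTP/2 multiplexing case), and does not close the connection itself, leaving that to the protocol layer. ActiveQueueLimiter, max_connections_active, and VConn are hypothetical names for illustration only, not Traffic Server's actual classes.

```cpp
// Hypothetical sketch of a bool-returning add_to_active_queue() that enforces
// an active-connection limit; none of these names are real Traffic Server code.
#include <cstddef>
#include <list>

class VConn; // stand-in for a net virtual connection

class ActiveQueueLimiter
{
public:
  explicit ActiveQueueLimiter(std::size_t max_connections_active) : max_active_(max_connections_active) {}

  // Returns true if the connection is (or already was) on the active queue,
  // false if the configured limit has been reached. It does NOT close the
  // connection itself; per the review discussion, the protocol layer
  // (e.g. the HTTP/2 error handling) decides how to signal and close.
  bool
  add_to_active_queue(VConn *vc)
  {
    for (VConn *queued : active_queue_) {
      if (queued == vc) {
        return true; // already active: nothing more to do
      }
    }
    if (active_queue_.size() >= max_active_) {
      return false; // over the limit: let the caller refuse the new stream
    }
    active_queue_.push_back(vc);
    return true;
  }

private:
  std::size_t max_active_;
  std::list<VConn *> active_queue_;
};
```

With that contract, the hunk above can simply refuse the new stream when the call returns false, and the hunk below can drop the old unconditional call at the end of create_stream().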
(diff continued):

```diff
+    error = Http2Error(Http2ErrorClass::HTTP2_ERROR_CLASS_CONNECTION, Http2ErrorCode::HTTP2_ERROR_NO_ERROR,
+                       "refused to create new stream, maxed out active connections");
+    return nullptr;
+  }
+
   // In half_close state, TS doesn't create new stream. Because GOAWAY frame is sent to client
   if (ua_session->get_half_close_local_flag()) {
     error = Http2Error(Http2ErrorClass::HTTP2_ERROR_CLASS_STREAM, Http2ErrorCode::HTTP2_ERROR_REFUSED_STREAM,
```
```diff
@@ -1226,7 +1233,6 @@ Http2ConnectionState::create_stream(Http2StreamId new_id, Http2Error &error)
   new_stream->mutex = new_ProxyMutex();
   new_stream->is_first_transaction_flag = get_stream_requests() == 0;
   increment_stream_requests();
-  ua_session->get_netvc()->add_to_active_queue();
 
   return new_stream;
 }
```
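On the error-code question in the thread: under RFC 7540, REFUSED_STREAM (0x7) tells the client the stream was not processed and may be retried automatically (section 8.1.4), while ENHANCE_YOUR_CALM (0xb) asks the peer to reduce its load; the patch as shown uses a connection-class error with NO_ERROR, i.e. a graceful GOAWAY. The sketch below only models that trade-off; H2ErrorCode, refusal_code, and prefer_retry are illustrative names, not Traffic Server's Http2ErrorCode API.

```cpp
// Illustrative model of the refusal options debated above; the numeric values
// mirror RFC 7540 section 7, but none of these names come from Traffic Server.
#include <cstdint>
#include <iostream>

enum class H2ErrorCode : uint32_t {
  NoError         = 0x0, // graceful shutdown (GOAWAY); client may reconnect, ideally to a less-loaded box
  RefusedStream   = 0x7, // stream not processed; RFC 7540 section 8.1.4 allows automatic retry
  EnhanceYourCalm = 0xb, // peer is generating excessive load; a hint to back off
};

// Pick a refusal code for a new stream once the active-connection limit is hit.
// `prefer_retry` stands in for the operational question in the review: retry
// quickly when one box is hot, or back off when the whole cluster is overloaded.
H2ErrorCode
refusal_code(bool prefer_retry)
{
  return prefer_retry ? H2ErrorCode::RefusedStream : H2ErrorCode::EnhanceYourCalm;
}

int
main()
{
  std::cout << static_cast<uint32_t>(refusal_code(true)) << "\n";  // prints 7
  std::cout << static_cast<uint32_t>(refusal_code(false)) << "\n"; // prints 11
}
```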