sfackler opened this issue on Oct 31, 2017 · 1 comment
Labels
A-server (Area: server) · C-feature (Category: feature. This is adding a new feature.) · E-medium (Effort: medium. Some knowledge of how hyper's internals work would be useful.)
The graceful shutdown logic currently waits until all Services have dropped (i.e. all connections have closed). However, with keep-alive this is overly conservative; what we'd really like is to wait until all pending requests have finished.

This can't be correctly modeled with the current API, however. The last point in a request/response pair that user code is aware of is when it sends the last bit of the response body through the body channel. It will still take some amount of time for those bytes to be written out over the network, so we can't shut down the instant that send completes.
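For concreteness, here is a minimal, self-contained sketch of what drop-based tracking looks like. The names (`ServiceHandle`, the counter) are illustrative, not hyper's actual types: graceful shutdown waits on a live count that only decrements when a Service drops, i.e. when its connection closes.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Illustrative stand-in for a per-connection Service: each live one
// holds a clone of the shared counter.
struct ServiceHandle {
    live: Arc<AtomicUsize>,
}

impl ServiceHandle {
    fn new(live: &Arc<AtomicUsize>) -> Self {
        live.fetch_add(1, Ordering::SeqCst);
        ServiceHandle { live: Arc::clone(live) }
    }
}

impl Drop for ServiceHandle {
    fn drop(&mut self) {
        // With keep-alive, this only runs when the connection itself
        // closes, which can be long after the last response finished.
        self.live.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let live = Arc::new(AtomicUsize::new(0));
    let handle = ServiceHandle::new(&live);
    assert_eq!(live.load(Ordering::SeqCst), 1);
    drop(handle); // connection closed -> shutdown sees the decrement
    assert_eq!(live.load(Ordering::SeqCst), 0);
}
```

A keep-alive connection with no requests in flight keeps its handle alive indefinitely, which is exactly the over-conservatism described above.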
@seanmonstar pointed out that a simple way to do this is to add a method that disables keep-alive on a Connection. Once keep-alive is off, the connection closes as soon as its in-flight request has been fully written out, so the existing Service::drop-based tracking becomes accurate.
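A minimal sketch of that proposal, assuming the method suggested above exists; `disable_keep_alive` and the `Connection` stub here are hypothetical, not hyper's actual API:

```rust
// Illustrative stub; `disable_keep_alive` is the hypothetical method
// proposed in this comment, not an existing hyper API.
struct Connection {
    keep_alive: bool,
}

impl Connection {
    fn disable_keep_alive(&mut self) {
        // After this, the connection accepts no new requests and closes
        // once the in-flight response has been flushed to the socket,
        // dropping its Service at exactly the right moment.
        self.keep_alive = false;
    }
}

fn main() {
    let mut conn = Connection { keep_alive: true };
    // On a shutdown signal, flip keep-alive off instead of waiting for
    // the client to close the idle connection on its own.
    conn.disable_keep_alive();
    assert!(!conn.keep_alive);
}
```

The appeal of this design is that no new bookkeeping is needed: shutdown just disables keep-alive on each connection and then waits for the drop-based tracking that already exists.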