Shutdown logic: Only wait on handlers #8495
Conversation
aiohttp/web_server.py (Outdated)
@@ -69,6 +72,7 @@ def pre_shutdown(self) -> None:
     async def shutdown(self, timeout: Optional[float] = None) -> None:
         coros = (conn.shutdown(timeout) for conn in self._connections)
         await asyncio.gather(*coros)
+        print("LENGTH", len(self._connections))
I'd like to assert the number of connections at this point, but can't think of a good way to get this in the tests. Any ideas?
Maybe patch the shutdown() function and see how many calls are made to it?
I don't think that works. The number of shutdown() calls doesn't matter; what matters is how many of them didn't result in the connection being removed from the set by the done callback.
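For illustration, this is the generic asyncio pattern being referred to (a standalone sketch, not aiohttp's actual code): each handler task removes itself from the tracking set via a done callback, so anything left in the set at shutdown is a handler that never finished.

```python
import asyncio

connections: set[asyncio.Task] = set()


async def fake_handler() -> None:
    await asyncio.sleep(0.1)


async def main() -> None:
    task = asyncio.create_task(fake_handler())
    connections.add(task)
    # The done callback receives the finished task and discards it from the set.
    task.add_done_callback(connections.discard)
    await task
    await asyncio.sleep(0)  # give the done callback a chance to run
    assert task not in connections


asyncio.run(main())
```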
I think if I can figure out how to get a reference to the Server object, then I could patch the .clear() call to check how many are still in the set when this is called.
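Something along these lines might do it (a rough sketch, not a working aiohttp test: it assumes the Server instance is reachable via client.server.runner.server, that _connections is a plain set or dict that can be swapped for a subclass, and that shutdown() still finishes with a _connections.clear() call, as discussed above):

```python
from aiohttp import web


async def test_connections_empty_at_clear(aiohttp_client) -> None:
    async def handler(request: web.Request) -> web.Response:
        return web.Response(text="ok")

    app = web.Application()
    app.router.add_get("/", handler)
    client = await aiohttp_client(app)
    resp = await client.get("/")
    assert resp.status == 200

    # Reach the low-level web.Server; _connections is private, so this is
    # deliberately fragile and exists only for the assertion below.
    server = client.server.runner.server
    conns = server._connections

    class Recording(type(conns)):  # same container type aiohttp uses internally
        len_at_clear = None

        def clear(self) -> None:
            # Record how many connections are still tracked when shutdown
            # reaches its final clear(), then do the real clear.
            type(self).len_at_clear = len(self)
            super().clear()

    server._connections = Recording(conns)

    await client.close()  # tears down the test server, running shutdown()

    # Expectation under discussion: the done callbacks removed everything,
    # so nothing should be left by the time clear() runs.
    assert Recording.len_at_clear == 0
```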
Will test later today. Still catching up, as I have a house full of family that doesn't have power from the hurricane in Houston.
I get the following in HA when I backported this to 3.9 and updated with the change backported as bdraco@6aa52bd
Seems to be missing a backport of #4200. I assume it subclasses BaseRequest? In that case I expect it will work once that's backported.
I'll make another integration branch with that backport to 3.9. I can't test with 3.10 yet until #8482 is backported or I make an integration branch for that, but since HA is on 3.9 it's better to test with 3.9 anyway.
#4200 has a whole lot of conflicts, so I'll wait for the backport to happen instead of trying to patch it into the integration branch, as there is a chance I'll screw it up and give you a bad test result.
Should be sorted in #8504.
Tested with HA on 3.9 with this PR + #8504 cherry-picked on top. Restarted a few times; everything seems OK.
Still needs a test and changelog; manual functional testing passed.
Adding a skip for now so we can get this into 3.10b0, as Sam is going to do another pass on this to add the test/changelog/docs before the stable release.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #8495      +/-   ##
==========================================
+ Coverage   97.54%   97.67%   +0.13%
==========================================
  Files         108      107        -1
  Lines       33549    33286      -263
  Branches     4027     3918      -109
==========================================
- Hits        32724    32513      -211
+ Misses        601      559       -42
+ Partials      224      214       -10

Flags with carried forward coverage won't be shown.
☔ View full report in Codecov by Sentry.
Backport to 3.10: 💔 cherry-picking failed — conflicts found
❌ Failed to cleanly apply 549c95b on top of patchback/backports/3.10/549c95b948dcddd6588f95545ad6c856f693c503/pr-8495
Backporting merged PR #8495 into master
🤖 @patchback
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: J. Nick Koston <nick@koston.org>
(cherry picked from commit 549c95b)
…ers (#8530)
Co-authored-by: pre-commit-ci[bot]
Co-authored-by: J. Nick Koston <nick@koston.org>
Co-authored-by: Sam Bull <git@sambull.org>
In aiohttp 3.9 we introduced new shutdown logic in run_app() that waited for all pending tasks to complete. This has caused issues with libraries that spawn tasks at runtime which then sit idle waiting for connections.
This PR removes that logic and instead just waits on all active request handlers. Documentation will be updated to teach users how to manage tasks themselves.
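For reference, the kind of pattern the docs will presumably recommend is a cleanup context that owns its background task explicitly, so run_app() never has to guess about it. A minimal sketch (the watcher coroutine is just a stand-in for a library-spawned idle task):

```python
import asyncio
import contextlib

from aiohttp import web


async def handle(request: web.Request) -> web.Response:
    return web.Response(text="hello")


async def watcher(app: web.Application) -> None:
    # Stand-in for a long-lived background job that mostly sits idle.
    while True:
        await asyncio.sleep(60)


async def background_tasks(app: web.Application):
    # cleanup_ctx: code before `yield` runs on startup, code after runs on
    # cleanup, so the task is cancelled and awaited explicitly rather than
    # being left for run_app() to reap.
    task = asyncio.create_task(watcher(app))
    yield
    task.cancel()
    with contextlib.suppress(asyncio.CancelledError):
        await task


app = web.Application()
app.router.add_get("/", handle)
app.cleanup_ctx.append(background_tasks)
web.run_app(app)
```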
@bdraco Please confirm this still works correctly with HA. The only difference should be that on application shutdown, active request handlers are now given time to complete instead of being cancelled immediately.
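A rough manual check of that difference (a sketch, not an official test: /slow is just an illustrative route, and shutdown_timeout is the existing run_app() knob bounding how long it waits): start the app, request /slow, and send Ctrl+C while the request is in flight; with this change the response should still arrive instead of the handler being cancelled.

```python
import asyncio

from aiohttp import web


async def slow(request: web.Request) -> web.Response:
    # A request that is in flight when shutdown starts should now be given
    # time to finish (bounded by shutdown_timeout) instead of being cancelled.
    await asyncio.sleep(5)
    return web.Response(text="finished cleanly")


app = web.Application()
app.router.add_get("/slow", slow)
web.run_app(app, shutdown_timeout=30)
```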
Fixes #8387