pytest.raises swallows warning about unawaited coroutine #10404
This sounds like a classic "holding the traceback keeps everything in referenced frames alive" problem, which we will want to fix. That said, there is no guarantee that garbage collection will run at any particular time (or at all!) even if we do fix it, so I don't consider this a serious bug.
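A minimal sketch (not pytest's actual internals) of why a kept traceback keeps frame locals, and therefore an unawaited coroutine, alive:

```python
import types

async def my_coro():
    pass

def raiser(coro):
    # `coro` is a local of this frame; the frame is referenced by the traceback
    raise RuntimeError("boom")

try:
    raiser(my_coro())
except RuntimeError as exc:
    kept = exc.__traceback__  # stand-in for the traceback ExceptionInfo keeps

# the raiser() frame is still reachable through the traceback, and so is
# the never-awaited coroutine sitting in its locals
frame_locals = kept.tb_next.tb_frame.f_locals
assert isinstance(frame_locals["coro"], types.CoroutineType)
# as long as `kept` is alive, the coroutine cannot be garbage collected, and
# its "was never awaited" RuntimeWarning is deferred
```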
Even if you do a
A next-generation raises could return an intentionally cleaned-up traceback representation. I'm not fond of the py.code remnant, but it's quite a bit of work to make it nice.
I wasted some time reading all the issues here to find out how to fail a test when a coroutine was not awaited. In case someone needs a solution:

```ini
# pytest.ini
filterwarnings=
    error:coroutine .* was never awaited:RuntimeWarning
    # A trick to fail on the `PytestUnraisableExceptionWarning: Exception ignored in:`
    # which happens when we catch a `was never awaited` warning
    error:.*Exception ignored in.*::_pytest
```
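With those filters in place, a test that drops a coroutine without awaiting it should fail as soon as the coroutine is collected inside the test. A hedged sketch (`job` is an invented name):

```python
async def job():
    pass

def test_forgot_to_await():
    job()  # creates a coroutine object but never awaits it
    # on CPython the dropped coroutine is collected right away, the
    # "coroutine 'job' was never awaited" RuntimeWarning fires inside the
    # test, and the filter above escalates it to a failure
```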
That's unrelated to this issue though. We use
Type checkers (at least mypy) now warn about unused coroutines, and ruff is working on it as well (astral-sh/ruff#9911), so most instances of this can be caught with static analysis.
In the trivial example above, yes. In real-world use cases we've seen, no. We've been using that option in mypy for several years; it cannot catch the issues we encountered (and still produces many false positives with TaskGroup etc.), which were caused by something deeper in the library with more complex low-level logic.
I think this is a bug in create_task. Also, FYI, this doesn't happen with unittest's assertRaises because it clears the traceback. I don't think pytest should do this, however, as in some tests I rely on the traceback not being cleared.
hmm, really? It doesn't seem to do anything particularly spicy; I can minify the repro to this:

```python
import pytest

async def my_task():
    pass

def blah(coro):
    raise RuntimeError

def test_scheduler_must_be_created_within_running_loop() -> None:
    with pytest.raises(RuntimeError) as _:
        blah(my_task())
```
Indeed, it's not a bug with create_task. unittest annihilates the traceback in https://github.com/python/cpython/blob/6245ee279db8d85d7a72bc6ec24021eab06987fe/Lib/unittest/case.py#L253-L278; pytest does not. Keeping the traceback prevents timely GC.
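For reference, a paraphrased sketch of what that clearing amounts to (not a verbatim copy of Lib/unittest/case.py): assertRaises keeps the exception object but drops the traceback, so the test frames become collectable right away:

```python
# paraphrased sketch of the stdlib behavior, not the actual unittest code
class AssertRaisesSketch:
    def __init__(self, expected):
        self.expected = expected
        self.exception = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        if exc_type is None or not issubclass(exc_type, self.expected):
            return False  # no exception, or an unexpected one: let it propagate
        # keep the exception for later inspection, but drop the traceback so
        # the frames (and their locals, e.g. an unawaited coroutine) can be
        # garbage collected promptly
        self.exception = exc_value.with_traceback(None)
        return True
```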
I'm failing to understand how one could clean up the traceback to avoid this issue. I at first thought we could just skip some frames in it, but it seems this needs a fairly involved modification to make any difference.

I tried to add a

Always annihilating the traceback is probably not great, but I could see a keyword argument
Definitely make clearing the traceback opt-in rather than opt-out. I suspect a gc.collect between each test will be quite a bit of overhead on test suites with lots of small fast tests, e.g. hypothesis or parametrized tests. Perhaps instead it would make sense to delay unsetting the unraisablehook until the absolute last possible moment in pytest session teardown, and issue 5 gc.collect calls before unsetting it.

Also, I think the unraisablehook is set and reset for each test. I think it would be better to set it once and update a variable with the current test to associate errors with, to make sure there's no window for a gc to run between tests.
Having an opt-in traceback clear won't help ~anybody with the OP's issue; there's already a warning in the documentation telling users they should be wary of garbage collection: https://docs.pytest.org/en/stable/reference/reference.html#pytest.raises, so best-practice users should be deleting their caught exceptions.

This sounds like a decent approach though. You won't get the error on the correct test, but you'll get it somewhere, and the warning/error should contain enough info to figure it out. Although in that case it is perhaps better to signal it as a general teardown error instead of blaming some random specific test, i.e. set the variable for the current test in unraisablehook to
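A sketch of what "deleting the caught exception" can look like in a test (names are illustrative; the explicit gc.collect only forces deferred warnings to fire deterministically):

```python
import gc

import pytest

def make_request():
    raise RuntimeError("connection refused")  # illustrative failure

def test_request_fails():
    with pytest.raises(RuntimeError) as exc_info:
        make_request()
    assert "refused" in str(exc_info.value)
    # drop the reference so the traceback (and anything its frames hold,
    # such as an unawaited coroutine) can be collected inside the test
    del exc_info
    gc.collect()
```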
Let's have the ExceptionInfo from raises trigger a ResourceWarning if it's not cleared after a test. Also, a discard-traceback option might be helpful; it could enable a warning and a path towards clearing by default, as the stdlib does. An opt-in policy is also thinkable.
This would raise a ResourceWarning when you failed to do:

```python
with pytest.raises(x) as exc_info:
    ...
with contextlib.closing(exc_info):
    ...
```

I think it's cleaner to make the unraisablehook more reliable.
@graingert was this fixed with #12958?
Almost! To use the new behavior you need to set the warning filter with the Python environment variable. I'll need to make a PR to change the built-in warning filters so they're applied as soon as possible and persist until the config teardown.
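Presumably this refers to the standard PYTHONWARNINGS mechanism (an assumption; the comment does not name the variable), along the lines of:

```
PYTHONWARNINGS="error::RuntimeWarning" pytest
```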
Binding the return value of the `pytest.raises` context manager to a variable prevents some warnings from being triggered as part of the test. As a result, the warning is not displayed in the test run summary, but only on stderr, and `-W error` will not fail the test.

Reproducible example:
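A sketch along the lines of the minified repro above (the exact original snippet is an assumption; names are illustrative):

```python
import pytest

async def my_task():
    pass

def schedule(coro):
    # stand-in for code that rejects the coroutine before awaiting it
    raise RuntimeError("no running event loop")

def test_scheduler_must_be_created_within_running_loop() -> None:
    with pytest.raises(RuntimeError) as _:
        schedule(my_task())
    # expected: "RuntimeWarning: coroutine 'my_task' was never awaited"
    # should fail the run under -W error, but with `as _` it does not
```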
Running the above code:
When replacing `with pytest.raises(RuntimeError) as _:` with `with pytest.raises(RuntimeError):`, everything works as expected and the warning causes the test run to fail.

My understanding is that the coroutine triggers the warning when it gets garbage collected. It looks like `with pytest.raises as _` keeps a reference to the coroutine that should trigger the warning. The reference prevents garbage collection of the coroutine until after the hook wrappers in `unraisableexception` have returned. As a result, the warning appears outside of the test case.

Additional information

Related issues: #8021 #5676