Recwarn empty #228
Comments
Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42): It seems recwarn needs some work. I am not sure what is causing the problem you are describing, though. I am hardly using warnings anymore these days. Given that you are using it, you might want to check if something obvious is wrong or could be improved in
Original comment by peritus (BitBucket: peritus, GitHub: peritus): This seems related: http://stackoverflow.com/questions/2390766/how-do-i-disable-and-then-re-enable-a-warning

I had to do that call for my tests to be properly isolated. At first, neither recwarn nor the warnings context manager worked as expected. Maybe pytest could handle that? Also note that my tests only failed without "-k" selection; when I selected "-k warnings", all tests passed as expected.
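The call referenced above was not captured in this thread, but the linked StackOverflow question is about saving and restoring warning filter state. As a hedged sketch (the function names here are illustrative, not from the original tests), the standard-library pattern for isolating filter changes per test is `warnings.catch_warnings`:

```python
import warnings

def emits_warning():
    # Stand-in for the code under test.
    warnings.warn("something happened", UserWarning)

def test_warning_is_recorded():
    # catch_warnings() saves the global filter state on entry and restores
    # it on exit, so this test cannot leak filter changes into other tests.
    # "always" keeps the once-per-location registry from swallowing repeats.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        emits_warning()
        emits_warning()
    assert len(caught) == 2
    assert "something happened" in str(caught[0].message)
```

Without the "always" filter, the second identical `warn()` call from the same location can be suppressed, which matches the isolation problems described in this comment.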
Original comment by Danilo Bellini (BitBucket: danilobellini, GitHub: danilobellini): The "works only with -k" behavior isn't so weird once you know what's happening; it happened to me in the past as well. "-k" selects some tests, so it's deselecting a test that would raise a warning, and you should know that:
Source: http://docs.python.org/library/warnings.html

So avoiding the "default", "module" and "once" actions is the only solution to that, as these would create/fill the registry dicts. The recwarn fixture makes/restores a backup of your filters and sets the action to "default", but doesn't touch the several possible registry dicts already written. To clean these you have to:
Still another solution (or perhaps a hack) is to give a different message to every single warning emitted (and that was how I found all of this). As this warning behavior is inherently a side effect, it would be nice to keep it from happening in other tests, but to make a more robust and self-contained test today, the

If you don't like any of these solutions, you can also mock the

PS: Sorry, I'm not going back to StackOverflow after my posts and comments there started being downvoted/deleted just because I cited content I had made; I feel I can't talk there about projects I've open-sourced or helped with code. (A diamond moderator also protected the person who said they would retaliate against me, but let's get back to the recwarn issue.) From your comment, I think I know what you wrote there. If I'm wrong, please copy and paste the link contents here.
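The per-module state Danilo describes is the `__warningregistry__` dict that the "default", "module" and "once" actions write into. A hedged sketch of cleaning those registries between tests (this helper is an illustration, not part of pytest):

```python
import sys
import warnings

def clear_warning_registries():
    # The "default", "module" and "once" actions record already-seen
    # warnings in a per-module __warningregistry__ dict; clearing those
    # dicts makes the same warning observable again in a later test.
    for module in list(sys.modules.values()):
        registry = getattr(module, "__warningregistry__", None)
        if registry is not None:
            registry.clear()
```

In a test suite this could run before each test (e.g. from a setup hook). Note that on recent Python 3 versions, mutating the filters (for instance via `warnings.simplefilter`) also invalidates these registries, which reduces the need for manual cleaning.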
@danilobellini is this still an issue?
Closing this one as unfixable from Python; #840 seems a good follow-up.
Originally reported by: Danilo Bellini (BitBucket: danilobellini, GitHub: danilobellini)
Problem: test whether a function "creates" warnings or whether it doesn't create any.
Solution:
The code is in:
https://github.com/danilobellini/audiolazy/blob/0fe2586bb81d9bc266998ce478a267a2659d4076/audiolazy/test/test_stream.py#L235
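The linked test uses the standard-library `warnings.catch_warnings(record=True)` context manager. A minimal sketch of that pattern for asserting both the presence and the absence of warnings (the `noisy` function is hypothetical, standing in for the code under test):

```python
import warnings

def noisy(x):
    # Hypothetical function that warns on negative input.
    if x < 0:
        warnings.warn("negative input", UserWarning)
    return abs(x)

def test_warns_on_negative():
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")  # record every warning, no suppression
        noisy(-1)
    assert len(w) == 1
    assert issubclass(w[0].category, UserWarning)

def test_no_warning_on_positive():
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")
        noisy(1)
    assert w == []  # comparing the recorded list directly asserts "no warnings"
```

The direct list comparison at the end is the same idea as the `recwarn.list` comparison discussed below.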
However, I think I should be using the recwarn funcarg. I tried this:
That didn't work: the test in the link works, but the test above "misses"/filters some warn() calls, and an error is raised when recwarn.pop() is called on an empty recwarn; this happens only when used_copies > 0 (i.e., it works when used_copies == 0 and copies > 0).
Also, recwarn.list isn't documented anywhere. I wanted to assert "there's no warning" and found no documented way to do it with recwarn. Perhaps I could use the pop() method in a try block, but I think the list comparison is simpler.
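Both approaches mentioned above can be sketched as pytest tests; this assumes pytest's behavior that `recwarn.pop()` raises AssertionError when no matching warning was recorded, and `do_something_quiet` is a hypothetical function under test:

```python
# recwarn is pytest's built-in fixture; these tests run under pytest.

def do_something_quiet():
    return 42  # emits no warnings

def test_no_warnings(recwarn):
    do_something_quiet()
    # recwarn.list holds the recorded warnings; an empty list means none fired.
    assert recwarn.list == []

def test_pop_in_try(recwarn):
    do_something_quiet()
    try:
        recwarn.pop(UserWarning)  # pop() raises AssertionError when empty
    except AssertionError:
        pass
    else:
        raise AssertionError("expected no UserWarning to be recorded")
```

As the reporter notes, the list comparison reads more directly than the pop-in-try variant.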