running a suite with no tests is not an error #62432
Comments
In bug https://bugs.launchpad.net/subunit/+bug/586176 I recorded a user request: if no tests are found, tools consuming subunit streams should be able to consider that an error. There is an analogous situation, though: if discover returns without error but finds nothing, running the resulting suite is worthless, as it has no tests. This is a bit of a slippery slope - what if discover finds one test when there should be thousands? Anyhow, I'm filing this because there have been a few times when things went completely wrong and this would have helped CI systems detect it. (For instance, the tests package was missing entirely, but tests were being scanned for in the whole source tree, so no discover-level error occurred.) I'm thinking I'll add a `--min-tests=X` parameter to unittest.main, with the semantics that if fewer than X tests are executed, the test run is considered a failure; folk can set this to 1 for the special case, or to any figure they want for larger suites.
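For illustration, a minimal sketch of the proposed semantics. Nothing here is part of unittest's actual interface: the `--min-tests` flag does not exist, and `main_with_min_tests` is a hypothetical wrapper built on APIs that do exist (`unittest.main(exit=False)`, `TestResult.testsRun`).

```python
# Hypothetical sketch of the proposal: run the suite, then treat the
# run as a failure if fewer than min_tests tests actually executed.
import sys
import unittest

def main_with_min_tests(min_tests=1):
    # exit=False makes unittest.main() return instead of calling
    # sys.exit(), so the result can be inspected afterwards;
    # module=None enables command-line-style test loading.
    program = unittest.main(module=None, exit=False)
    result = program.result
    if result.testsRun < min_tests:
        print(f"error: ran {result.testsRun} tests, expected at least "
              f"{min_tests}", file=sys.stderr)
        sys.exit(1)
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == "__main__":
    main_with_min_tests(min_tests=1)
```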
I do not quite see the need to complicate the interface for most users in a way that does not really solve all of the realistic problems.

```python
import unittest
unittest.main()
```

Running a module containing nothing but this reports:

```
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
```
The minimum number of tests is a fast-moving target, and unless you know exactly how many tests you have and use that value, missing tests will go undetected. If you only want to distinguish between 0 and more tests, a boolean flag is enough, but checking that at least 1 test in the whole test suite is run is quite pointless IMHO (I assume it's quite easy to notice if/when it happens). Making this per-module or even per-class would be more interesting (because it's harder to spot these problems), but OTOH there's no way to know for sure if this is what the user wants. A good compromise might be a boolean flag that generates a warning based on some heuristic (e.g. test discovery found a test_*.py file that defines no tests, or a TestCase class that defines no test_* methods and has no subclasses, or no subclasses with test_* methods).
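A rough sketch of what the per-class part of such a heuristic could look like. This is an assumption about the design, not existing unittest behaviour: `warn_on_empty_cases` is a made-up helper (the subclass check from the comment is omitted for brevity), though `TestLoader.getTestCaseNames` is a real API.

```python
# Sketch of the suggested warning heuristic: flag TestCase subclasses
# in a module that contribute no test_* methods.
import unittest

def warn_on_empty_cases(module):
    loader = unittest.TestLoader()
    for name in dir(module):
        obj = getattr(module, name)
        if (isinstance(obj, type)
                and issubclass(obj, unittest.TestCase)
                and obj is not unittest.TestCase  # skip the re-exported base
                and not loader.getTestCaseNames(obj)):
            print(f"warning: {module.__name__}.{obj.__name__} "
                  f"defines no test methods")
```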
@terry in principle you're right: there are an arbitrary number of things that can go wrong, but in practice what we see is either catastrophic failure, where nothing is loaded at all *and* no error is returned, or localised failure, where the deferred reporting of failed imports serves well enough. The former is caused by things like a wrong path in a configuration file. @ezio sure - a boolean option would meet the needs reported to me; I was suggesting a specific implementation in an attempt to be generic enough that we wouldn't need to maintain two things if more were added in future.
You missed my point, which is that tools consuming subunit streams are already able to consider 'no tests found' to be an error. Conversely, when I run the suite on my Windows box, I usually consider only 1 or 2 errors to be success. After unittest reports actual results, the summary pass/fail judgment is only advisory. To be really flexible and meet all needs for automated adjustment of pass/fail, the new parameter should be a function that gets the numbers and at least the set of tests that 'failed'.
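A sketch of that judgment-function idea. No such hook exists in unittest today; `judge` is an invented name, applied here on top of the real `unittest.main(exit=False)` and `TestResult` APIs.

```python
# Hypothetical user-supplied pass/fail judgment: after the run, a
# callable receives the TestResult and decides whether the run counts
# as a pass, overriding the advisory summary.
import sys
import unittest

def judge(result):
    # Example policy from the comment above: tolerate a couple of
    # known-bad tests, but never accept a run of zero tests.
    bad = len(result.errors) + len(result.failures)
    return result.testsRun > 0 and bad <= 2

if __name__ == "__main__":
    program = unittest.main(module=None, exit=False)
    sys.exit(0 if judge(program.result) else 1)
```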
I'd agree that a test run that actually runs zero tests almost always indicates an error, and it would be better if this were made clear. I have this problem a great deal with Go, where the test tools are awful and it's very easy to think you have a successful test run (PASS) when you actually ran zero tests. Particularly with discovery, you will want to know when your invocation is wrong. I'm agnostic on a new `--min-tests` parameter, but having zero tests found should return a non-zero exit code and display a warning.
I'm not convinced we need something that complex here, but I think it would make sense to make `unittest discover` fail when it doesn't discover a single test. As packagers, we've been bitten more than once by packages whose tests suddenly stopped being discovered, and it would be really helpful if we could catch this automatically without having to resort to hacks.
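Until something like this lands, one stopgap using only existing APIs (`TestLoader.discover`, `TestSuite.countTestCases`) is to count the discovered tests before running them; the `tests` directory name below is a placeholder.

```python
# Packaging workaround with current unittest APIs: fail fast if
# discovery returns an empty suite instead of silently passing.
import sys
import unittest

suite = unittest.TestLoader().discover(start_dir="tests")
if suite.countTestCases() == 0:
    sys.exit("error: no tests discovered under 'tests/'")
result = unittest.TextTestRunner().run(suite)
sys.exit(0 if result.wasSuccessful() else 1)
```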
With more experience, I agree that 0/0 tests passing should not be a pass.
I don't know the unittest test runner, so I have no opinion; changing the behavior would be backward incompatible. As a libregrtest user... and as the author of the change (!), I like the new Python test runner (libregrtest) behavior, obviously :-) https://discuss.python.org/t/unittest-fail-if-zero-tests-were-discovered/21498/5
I think it should be an error, for reasons explained in the thread.
As discussed in https://discuss.python.org/t/unittest-fail-if-zero-tests-were-discovered/21498/7, it is common for test runner misconfiguration to result in no tests being found; this should be an error. Fixes: python#62432