running a suite with no tests is not an error #62432

Closed · rbtcollins opened this issue Jun 16, 2013 · 12 comments · Fixed by #102051
Labels: tests (Tests in the Lib/test dir), type-feature (A feature request or enhancement)

Comments

@rbtcollins (Member) commented Jun 16, 2013:

BPO 18232
Nosy @terryjreedy, @rbtcollins, @ezio-melotti, @voidspace, @mgorny, @kamilturek
PRs
  • bpo-18232: Return unsuccessfully if no unit tests were run #24893
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = None
    created_at = <Date 2013-06-16.19:32:39.867>
    labels = ['type-feature', 'tests', '3.10']
    title = 'running a suite with no tests is not an error'
    updated_at = <Date 2021-03-16.12:25:51.548>
    user = 'https://github.com/rbtcollins'

    bugs.python.org fields:

    activity = <Date 2021-03-16.12:25:51.548>
    actor = 'mgorny'
    assignee = 'none'
    closed = False
    closed_date = None
    closer = None
    components = ['Tests']
    creation = <Date 2013-06-16.19:32:39.867>
    creator = 'rbcollins'
    dependencies = []
    files = []
    hgrepos = []
    issue_num = 18232
    keywords = ['patch']
    message_count = 8.0
    messages = ['191282', '191611', '192331', '226617', '226620', '226631', '388669', '388682']
    nosy_count = 6.0
    nosy_names = ['terry.reedy', 'rbcollins', 'ezio.melotti', 'michael.foord', 'mgorny', 'kamilturek']
    pr_nums = ['24893']
    priority = 'normal'
    resolution = None
    stage = 'patch review'
    status = 'open'
    superseder = None
    type = 'enhancement'
    url = 'https://bugs.python.org/issue18232'
    versions = ['Python 3.10']


    @rbtcollins (Member, Author) commented:

    In bug https://bugs.launchpad.net/subunit/+bug/586176 I recorded a user request: if no tests are found, tools consuming subunit streams should be able to consider that an error.

    There is an analogous situation, though: if discovery finds no tests but returns without error, running the resulting suite is worthless, as it has no tests. This is a bit of a slippery slope - what if discover finds one test when there should be thousands?

    Anyhow, I'm filing this because there have been a few times when things went completely wrong and it would have helped CI systems to detect that. (For instance, the tests package was missing entirely, but tests were being scanned in the whole source tree, so no discovery-level error occurred.)

    I'm thinking I'll add a '--min-tests=X' parameter to unittest.main, with the semantics that if fewer than X tests are executed, the test run will be considered a failure, and folk can set this to 1 for the special case, or any arbitrary figure they want for larger suites.
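
    (A minimal sketch of the semantics proposed above, for illustration only: the --min-tests flag itself does not exist in unittest, so the threshold here is just a constant set by the caller.)

    import sys
    import unittest

    # Hypothetical stand-in for the proposed --min-tests=X option: run the
    # suite without exiting, then fail the process if fewer than MIN_TESTS
    # tests were actually executed.
    MIN_TESTS = 1

    prog = unittest.main(exit=False)   # collects and runs tests as usual
    if prog.result.testsRun < MIN_TESTS:
        print(f"error: only {prog.result.testsRun} test(s) ran, expected at least {MIN_TESTS}",
              file=sys.stderr)
        sys.exit(1)
    sys.exit(0 if prog.result.wasSuccessful() else 1)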

    @ezio-melotti added the tests (Tests in the Lib/test dir) and type-feature (A feature request or enhancement) labels on Jun 16, 2013
    @terryjreedy (Member) commented:

    I do not quite see the need to complicate the interface for most users in a way that does not really solve all of the realistic problems.

    import unittest
    unittest.main()

    # output:
    # Ran 0 tests in 0.000s
    #
    # OK
    It seems to me that a continuous integration system should parse out the counts of tests run, ok, failed or errored, and skipped (or use a lower-level interface to grab the numbers before they are printed), report them, and compare them to previous numbers. Even one extra skip might be something that needs to be explained. An 'arbitrary' figure could easily fail to detect real problems.
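
    (For reference, a sketch of the "lower-level interface" idea: run the discovered suite through a runner and read the counts off the TestResult rather than parsing the printed summary. The "tests" start directory is an example.)

    import unittest

    # Build and run the suite programmatically; the TestResult carries the
    # raw numbers a CI job could store and compare against the previous run.
    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=0).run(suite)

    counts = {
        "run": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "skipped": len(result.skipped),
    }
    print(counts)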

    @ezio-melotti (Member) commented:

    > I'm thinking I'll add a '--min-tests=X' parameter to unittest.main, with the semantics that if fewer than X tests are executed, the test run will be considered a failure,

    The minimum number of tests is a fast-moving target, and unless you know exactly how many tests you have and use that value, missing tests will go undetected. If you only want to distinguish between 0 and more tests, a boolean flag is enough, but checking that at least 1 test in the whole test suite is run is quite pointless IMHO (I assume it's quite easy to notice if/when it happens).

    Making this per-module or even per-class would be more interesting (because these problems are harder to spot), but OTOH there's no way to know for sure if this is what the user wants. A good compromise might be a boolean flag that generates a warning based on some heuristic (e.g. test discovery found a test_*.py file that defines no tests, or a TestCase class that defines no test_* methods and has no subclasses with test_* methods).
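
    (A rough sketch of one such heuristic, assuming an importable test package whose name is supplied by the caller; it only covers the "TestCase class with no test_* methods" case and ignores the subclass refinement.)

    import importlib
    import inspect
    import pkgutil
    import unittest

    def classes_without_tests(package_name):
        """Yield (module, class) names for TestCase subclasses defining no test_* methods."""
        loader = unittest.TestLoader()
        package = importlib.import_module(package_name)
        for info in pkgutil.iter_modules(package.__path__):
            if not info.name.startswith("test"):
                continue
            module = importlib.import_module(f"{package_name}.{info.name}")
            for _, cls in inspect.getmembers(module, inspect.isclass):
                if (cls.__module__ == module.__name__
                        and issubclass(cls, unittest.TestCase)
                        and not loader.getTestCaseNames(cls)):
                    yield module.__name__, cls.__name__

    # "myproject.tests" is a placeholder package name.
    for module_name, class_name in classes_without_tests("myproject.tests"):
        print(f"warning: {module_name}.{class_name} defines no test_* methods")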

    @rbtcollins (Member, Author) commented:

    @terry in principle you're right: there are an arbitrary number of things that can go wrong. But in practice what we see is either catastrophic failure, where nothing is loaded at all *and* no error is returned, or localised failure, where the deferred reporting of failed imports serves well enough.

    The former is caused by things like a wrong path in a configuration file.

    @ezio sure - a boolean option would meet the needs reported to me; I was suggesting a specific implementation in an attempt to be generic enough that we wouldn't need to maintain two things if more were added in the future.

    @terryjreedy (Member) commented:

    You missed my point, which is that tools consuming subunit streams are already able to consider 'no tests found' to be an error. Conversely, when I run the suite on my Windows box, I usually consider only 1 or 2 errors to be success. After unittest reports actual results, the summary pass/fail judgment is only advisory.

    To be really flexible and meet all needs for automated adjustment of pass/fail, the new parameter should be a function that gets the numbers and at least the set of tests that 'failed'.
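
    (A sketch of that idea: a caller-supplied verdict function receives the TestResult and overrides the plain "all passed" judgment. The run_with_verdict name and protocol are hypothetical, not an existing unittest API; the "tests" path is an example.)

    import unittest

    def run_with_verdict(suite, verdict):
        # Run the suite normally, then let the callback decide pass/fail
        # from the counts and the recorded failures.
        result = unittest.TextTestRunner().run(suite)
        return verdict(result)

    # Example policy in the spirit of the comments above: zero tests is a
    # failure, and up to two known-flaky failures still count as a pass.
    ok = run_with_verdict(
        unittest.defaultTestLoader.discover("tests"),
        lambda r: r.testsRun > 0 and len(r.failures) + len(r.errors) <= 2,
    )
    print("PASS" if ok else "FAIL")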

    @voidspace (Contributor) commented:

    I'd agree that a test run that actually runs zero tests almost always indicates an error, and it would be better if this was made clear.

    I have this problem a great deal with Go, where the test tools are awful, and it's very easy to think you have a successful test run (PASS) when you actually ran zero tests.

    Particularly with discovery, you will want to know when your invocation is wrong.

    I'm agnostic on a new "--min-tests" parameter, but having zero tests found should return a non-zero exit code and display a warning.

    @mgorny (Mannequin) commented Mar 14, 2021:

    I'm not convinced we need something that complex here but I think it would make sense to make 'unittest discover' fail when it doesn't discover a single test. As packagers, we've been bitten more than once by packages whose tests suddenly stopped being discovered, and it would be really helpful if we were able to catch this automatically without having to resort to hacks.

    @terryjreedy (Member) commented:

    With more experience, I agree that 0/0 tests passing should not be a pass.

    @mgorny added the 3.10 (only security fixes) label on Mar 16, 2021
    @ezio-melotti transferred this issue from another repository on Apr 10, 2022
    @gpshead removed the 3.10 (only security fixes) label on Nov 29, 2022
    @gpshead (Member) commented Nov 29, 2022

    @terryjreedy (Member) commented:

    #98903 makes the Python test suite fail if no tests are run.
    @vstinner Do you think unittest should be modified?

    @vstinner (Member) commented:

    > @vstinner Do you think unittest should be modified?

    I don't know the unittest test runner. I have no opinion. Changing the behavior would be backward incompatible.

    As a libregrtest user.... and as the author of the change (!), I like the new Python test runner (libregrtest) behavior, obviously :-) https://discuss.python.org/t/unittest-fail-if-zero-tests-were-discovered/21498/5

    @merwok (Member) commented Nov 29, 2022:

    I think it should be an error, for reasons explained in the thread.
    Using the same error code as pytest seems good.

    stefanor added a commit to stefanor/cpython that referenced this issue Feb 19, 2023
    As discussed in https://discuss.python.org/t/unittest-fail-if-zero-tests-were-discovered/21498/7
    
    It is common for test runner misconfiguration to fail to find any tests; this should be an error.
    
    Fixes: python#62432
    gpshead pushed a commit that referenced this issue Apr 27, 2023
    As discussed in https://discuss.python.org/t/unittest-fail-if-zero-tests-were-discovered/21498/7
    
    It is common for test runner misconfiguration to fail to find any tests; this should be an error.
    
    Fixes: #62432
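
    (With the merged change, an empty run is reported as an error and the runner exits with status 5, matching pytest's "no tests collected" code suggested earlier in the thread, so a CI wrapper only needs to check the return code. A sketch, assuming a "tests" start directory; on older Pythons without the fix the run still prints "OK" and returns 0.)

    import subprocess
    import sys

    # Run discovery in a child process and propagate its exit status.
    proc = subprocess.run([sys.executable, "-m", "unittest", "discover", "-s", "tests"])
    if proc.returncode == 5:
        print("no tests were discovered", file=sys.stderr)
    sys.exit(proc.returncode)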