Exit non-zero if no tests were run #812
Comments
I think SKIPPED tests should also result in a non-zero exit!
@sh3llsh0ck you mean if a test session skipped all collected tests?
If there is a skipped test, the whole run should exit with 1, just like it does when there is a failed test among otherwise passing tests. In my case I have a custom option to pass a parameter to a test function; what if I forget to pass this parameter? I saw the comment mentioning that the non-zero exit is needed for Jenkins, so a non-zero exit would prompt me to revise the Jenkins configuration in this case and check whether I forgot to pass a parameter or something.
"Skip" is meant for tests that are OK to be skipped under the current configuration. It is common for example to skip tests are specific to a platform when running in a different platform, or tests meant for a specific Python version to skip in other Python versions. In all those cases returning non-zero would break the test suite, which is not desirable. For your specific case, wouldn't make more sense to make your test function to fail instead of skipping if the test does not receive the parameter? |
so if it is not OK , I have to add something to make it fail instead of skip, so i can make the test fail if the required python version or any required package version is not as expected, right ? i have followed the documentation in order to add a custom option, but there is nothing about the status: for example, why Metafunc.parametrize(argnames, argvalues, indirect=False, ids=None, scope=None, if_missing='Skip') or there is another way to make it fail when i forgot to pass a parameter that is required ? |
Can you post some minimum example of what you are trying to accomplish? Best create a new issue for that, as this is getting off-topic. |
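A minimal sketch of the fail-instead-of-skip approach suggested above, assuming a hypothetical --required-param command-line option (the option, fixture, and file names are illustrative, not from this thread):

```python
# conftest.py -- sketch: fail loudly when a required custom option is missing,
# instead of letting the affected tests be skipped silently.
import pytest


def pytest_addoption(parser):
    # Hypothetical option that the tests below cannot run without.
    parser.addoption("--required-param", action="store", default=None,
                     help="value needed by the parametrized tests")


@pytest.fixture
def required_param(request):
    value = request.config.getoption("--required-param")
    if value is None:
        # Failing (rather than skipping) keeps the exit status non-zero,
        # so a forgotten parameter shows up as a red build in Jenkins.
        pytest.fail("--required-param was not provided on the command line")
    return value


# In a test module:
# def test_something(required_param):
#     assert required_param == "expected-value"
```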
I definitely agree that if no tests are run, that's a problem.
We will merge #817 soon with that. 😄
@nicoddemus - Your profile picture is the best. I love those penguins. Thanks for the quick response!
@ghostsquad thanks! 😊
Hello, is there a specific reason for this behavior? Other test/build tools I know of don't fail when there are no tests to run.
@landier I disagree (and many other people too, it seems) - if you don't have any tests, why bother running pytest? It's easy to accidentally run pytest in a way that collects no tests. Maybe a config option should be added?
To answer your question: I'm setting up continuous integration around pytest and it feels very strange to have the whole build process fail because there are no tests yet, even if everything is fine. As I said, other test/build tools don't act the way pytest does. A good point would be to be compliant with what is, de facto, a standard for such tools. Besides, I strongly disagree with the idea that running no tests is a failure. A failure is a test failing. A config option is a way to fix this.
@landier it was decided by the vast majority that no tests run should be a failure, as @The-Compiler said. Please follow the (lengthy) discussion at #500. A config option to change this behavior is perhaps acceptable if more people feel the same way.
I like the idea of "no tests found" as a warning, and a config option to treat warnings as errors. That way, you can build up your CI without tests at first, and maybe with a few other "problems" that just bubble up as warnings. Then as you button things up, switch over to "treat warnings as errors".
I have also thought about a configuration option which would turn pytest-warnings into an error (return non-zero), like most compilers support. @ghostsquad would you mind opening a separate issue for this? I think this is independent of this discussion, although it is certainly related.
btw: with tools like https://github.com/tarpas/pytest-testmon it might happen that no tests are run, which is then considered an error by an outer pytest-watch. Do you think pytest-testmon should make pytest return 0 in that case (if that's possible already?), or should pytest-watch also handle 5 as a "pass"?
Not sure; it would depend on how users use pytest-testmon and whether no tests executed should be regarded as a problem or not.
It's not a problem: pytest-testmon deselects all tests that do not need to be run, because no code was changed in the covered parts (via coverage.py). It is an awesome tool btw, and with something like pytest-watch or similar tools it allows for a very good TDD setup. For now I've submitted a PR to pytest-watch: joeyespo/pytest-watch#42.
py.test returns exit code 5 in case no tests are run/collected. This can happen with tools like pytest-testmon. Ref: pytest-dev/pytest#812 Ref: tarpas/pytest-testmon#31
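For reference, one common workaround for the exit-code-5 situation is to remap it in a conftest.py hook; a sketch, assuming a pytest version where overriding session.exitstatus in pytest_sessionfinish is honored (plugins such as pytest-custom-exit-code wrap the same idea behind a command-line flag):

```python
# conftest.py -- sketch: treat "no tests collected" (exit code 5) as success.
# Whether overriding session.exitstatus here takes effect can depend on the
# pytest version, so verify against the version you actually run.

def pytest_sessionfinish(session, exitstatus):
    if exitstatus == 5:  # 5 == pytest.ExitCode.NO_TESTS_COLLECTED in modern pytest
        session.exitstatus = 0
```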
Sorry for necro bumping.
I would also like this to be configurable. Should I open a new issue? I've read the linked threads and couldn't find whether such a flag was added; I've only found configuration related to treating warnings as errors and nothing about changing exit codes.
Not at all! Please feel free to open another issue with updated requirements, given that pytest has changed a bit since this issue was closed. 👍
Also introduces the "slow" and "authentication" pytest mark to skip tests which are slow or require authentication. Due to pytest-dev/pytest#812 it also adds a dummy test to ensure a 0 exit-status when no tests are run.
For information, I ran into this issue today while running pytest to collect code coverage on a project directory that has no tests yet. I personally disagree with this decision of returning an error code when no test is executed, because it makes it harder to explain when pytest succeeds.
I have tried to understand the rationale behind this decision by reading this ticket and #500. As far as I understand, the only reason is that making pytest return an error helps users detect when they make mistakes in the filters of their tests in their CI pipelines. To me, this looks potentially harmful because users may rely on this error code to make sure that their CI pipeline is correctly implemented instead of looking at the test log. In particular, some tests could run (so pytest does not return an error code) but others could be unintentionally skipped (because of an incorrect filter), and we would still need to verify that by looking at the test log (or writing a script that does it for us). Please don't hesitate to let me know if I misunderstood the rationale or am missing other valid use cases. Many thanks in advance for taking into consideration this code coverage use case and also the definition of pytest running successfully. Best regards, Marc-Olivier
I don't follow as far as your use-case goes - what's the point of "collecting the code coverage" of nothing?
Thanks for the question! 👍 I am working with a monorepo workflow and need the aggregated code coverage of all the projects within this Git repo. So I need to collect the code coverage of the directory without tests as well, so that this lack of tests is reflected in the aggregated code coverage. Does it make more sense now? I also don't think that I would collect the code coverage of nothing if I were not in a monorepo. However, I still think that we should have a way to let pytest exit with a success status in this case.
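For the monorepo coverage scenario described above, a possible workaround (a sketch only, not a built-in pytest feature) is to drive pytest programmatically and translate the no-tests-collected status yourself; the --cov flag assumes pytest-cov is installed, pytest.ExitCode assumes a reasonably recent pytest, and the paths are placeholders:

```python
# run_project_tests.py -- sketch: run pytest for one monorepo project and
# treat "no tests collected" as success so aggregated coverage still works.
import sys

import pytest


def main() -> int:
    # pytest.main() returns the same status code the pytest CLI would exit with.
    code = pytest.main(["--cov=my_project", "my_project/tests"])  # placeholder paths
    if code == pytest.ExitCode.NO_TESTS_COLLECTED:  # exit code 5
        return 0
    return int(code)


if __name__ == "__main__":
    sys.exit(main())
```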
[Split out of https://github.com//issues/500#issuecomment-112204804 into a separate issue]
If no tests were run, pytest should exit with a non-zero status.
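This is the behavior pytest ended up with. For reference, recent pytest versions expose the possible statuses as pytest.ExitCode; a quick sketch of what each value means (per current pytest releases):

```python
# Sketch: exit statuses exposed by recent pytest versions via pytest.ExitCode.
import pytest

assert pytest.ExitCode.OK == 0                  # all collected tests passed
assert pytest.ExitCode.TESTS_FAILED == 1        # at least one test failed
assert pytest.ExitCode.INTERRUPTED == 2         # run was interrupted (e.g. Ctrl-C)
assert pytest.ExitCode.INTERNAL_ERROR == 3      # internal pytest error
assert pytest.ExitCode.USAGE_ERROR == 4         # command-line usage error
assert pytest.ExitCode.NO_TESTS_COLLECTED == 5  # the non-zero status this issue asked for
```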