
Exit non-zero if no tests were run #812

Closed
esiegerman opened this issue Jul 2, 2015 · 24 comments
Labels
type: enhancement new feature or API change, should be merged into features branch

Comments

@esiegerman

[Split out of https://github.com/pytest-dev/pytest/issues/500#issuecomment-112204804 into a separate issue]

If no tests were run, pytest should exit with a non-zero status.

nicoddemus added commits to nicoddemus/pytest that referenced this issue Jul 4, 2015
@nicoddemus nicoddemus added the type: enhancement new feature or API change, should be merged into features branch label Jul 5, 2015
@mostafahussein

I think SKIPPED tests should also result in a non-zero exit!

@nicoddemus
Member

@sh3llsh0ck you mean if a test session skipped all collected tests?

@mostafahussein

If there is a skipped test, the whole run should exit with 1, just like it does when there is a failed test among otherwise passing tests.

In my case I have a custom option that passes a parameter to a test function. What if I forget to pass this parameter?

The comment mentioned above says the non-zero exit is required for Jenkins. The same applies to my case: when I use Jenkins, if I forget to add the parameter the test will be SKIPPED, but Jenkins will consider the test passed (due to the zero exit), which is not what actually happened.

So a non-zero exit would prompt me to revise the Jenkins configuration in this case and check whether I forgot to pass a parameter or something.

@nicoddemus
Member

"Skip" is meant for tests that are OK to be skipped under the current configuration. It is common for example to skip tests are specific to a platform when running in a different platform, or tests meant for a specific Python version to skip in other Python versions. In all those cases returning non-zero would break the test suite, which is not desirable.

For your specific case, wouldn't it make more sense for your test function to fail instead of skipping when it does not receive the parameter?
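
A minimal sketch of that suggestion, assuming a custom command-line option (the --backend-url option and the backend_url fixture are hypothetical names, just for illustration):

```python
# conftest.py -- sketch of "fail instead of skip" for a missing custom option;
# the option name --backend-url and the fixture name are hypothetical.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--backend-url",
        action="store",
        default=None,
        help="backend URL required by these tests (hypothetical option)",
    )


@pytest.fixture
def backend_url(request):
    url = request.config.getoption("--backend-url")
    if url is None:
        # Failing here (rather than calling pytest.skip) makes a forgotten
        # parameter visible in CI instead of silently turning the run green.
        pytest.fail("--backend-url was not provided")
    return url
```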

@mostafahussein

So if it is not OK, I have to add something to make it fail instead of skip, e.g. so I can make the test fail if the required Python version or any required package version is not as expected, right?

I followed the documentation to add a custom option, but there is nothing about controlling the status.

For example, why doesn't Metafunc.parametrize have an option called if_missing that defaults to skipping?

Metafunc.parametrize(argnames, argvalues, indirect=False, ids=None, scope=None, if_missing='skip')

Or is there another way to make a test fail when I forget to pass a parameter that is required?
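
For reference, Metafunc.parametrize has no if_missing argument; a sketch of one way to get fail-instead-of-skip behavior for a missing parameter is to enforce the option at collection time (the --required-param option and the param argname are made-up names):

```python
# conftest.py -- sketch: abort collection if a required option is missing,
# instead of skipping; "--required-param" and "param" are hypothetical names.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--required-param",
        action="append",
        default=[],
        help="values used to parametrize tests (hypothetical option)",
    )


def pytest_generate_tests(metafunc):
    if "param" in metafunc.fixturenames:
        values = metafunc.config.getoption("--required-param")
        if not values:
            # Error out loudly so a forgotten option cannot make the run green.
            raise pytest.UsageError("--required-param must be given at least once")
        metafunc.parametrize("param", values)
```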

@nicoddemus
Member

Can you post a minimal example of what you are trying to accomplish? It would be best to create a new issue for that, as this is getting off-topic.

@ghostsquad

I definitely agree that if no tests are run, that's a problem.

@nicoddemus
Member

I definitely agree that if no tests are run, that's a problem.

We will merge #817 soon with that. 😄

@ghostsquad

@nicoddemus - Your profile picture is the best. I love those penguins. Thanks for the quick response!

@nicoddemus
Member

@ghostsquad thanks! 😊

@landier

landier commented Nov 6, 2015

Hello,
It sounds weird to return 5 if no tests are run.
For instance, a run in which no tests are executed simply because there are no tests yet will break the build if the result is not 0.
Other test tools (msbuild, nose, nose2, ...) do not behave this way.

@The-Compiler
Member

@landier I disagree (and many other people too it seems) - if you don't have any tests, why bother running pytest?

It's easy to accidentally run -k with the wrong filter, or make some other mistake, and end up running no tests without noticing.

Maybe a config option should be added?

@landier

landier commented Nov 6, 2015

To answer your question: I'm setting up continuous integration around pytest, and it feels very strange to have the whole build fail because there are no tests yet, even though everything is fine. As I said, other test/build tools don't act the way pytest does. It would be good to be compliant with what is, de facto, a standard for such tools.

Besides, I strongly disagree with the idea that running no tests is a failure. A failure is a test failing.

A config option is a way to fix this.

@nicoddemus
Member

@landier it was decided by the vast majority that no tests run should be a failure, as @The-Compiler said. Please follow the (lengthy) discussion at #500.

A config option to change this behavior is perhaps acceptable if more people feel the same way.

@ghostsquad

I like the idea of "no tests found" as a warning, and a config option to treat warnings as errors. That way, you can build up your CI without tests at first, and maybe with a few other "problems" that just bubble up as warnings. Then as you button things up, switch over to "treat warnings as errors".

@nicoddemus
Member

I have also thought about a configuration option which would turn pytest-warnings into an error (return non-zero), like most compilers support.

@ghostsquad would you mind opening a separate issue for this? I think it is independent of this discussion, although it is certainly related.

@blueyed
Contributor

blueyed commented Dec 16, 2015

btw: with tools like https://github.com/tarpas/pytest-testmon it might happen that no tests are run, which an outer pytest-watch then considers an error (and its --onfail action is triggered).

Do you think pytest-testmon should make pytest return 0 in that case (if that's possible already?), or should pytest-watch handle 5 also as a "pass"?

Ref: tarpas/pytest-testmon#31
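
For context, a driver that invokes pytest programmatically can translate the "no tests collected" status itself; a minimal sketch, assuming exit status 5 keeps that meaning (pytest.main returns the session's exit status):

```python
# run_tests.py -- sketch of a wrapper (in the spirit of pytest-watch) that
# treats "no tests collected" (exit status 5) as a pass.
import sys

import pytest

NO_TESTS_COLLECTED = 5  # pytest's exit status when nothing was collected

status = pytest.main(sys.argv[1:])
sys.exit(0 if status == NO_TESTS_COLLECTED else int(status))
```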

@nicoddemus
Member

Not sure; it would depend on how users use pytest-testmon, and on whether no tests being executed should be regarded as a problem or not.

@blueyed
Contributor

blueyed commented Dec 16, 2015

It's not a problem: pytest-testmon deselects all tests that do not need to be run, because no code was changed in the covered parts (tracked via coverage.py). It is an awesome tool, btw, and together with something like pytest-watch it allows for a very good TDD setup.

For now I've submitted a PR to pytest-watch: joeyespo/pytest-watch#42.
After all, when using pytest-watch you would take greater care to select your tests (e.g. no accidental typos with -k), and then there is less reason to error out. Also, pytest-watch is typically invoked manually / on request and not in test suites / CI scripts.

blueyed added a commit to blueyed/pytest-watch that referenced this issue Dec 16, 2015
py.test returns exit code 5 in case no tests are run/collected.
This can happen with tools like pytest-testmon.

Ref: pytest-dev/pytest#812
Ref: tarpas/pytest-testmon#31
@WloHu

WloHu commented Jul 30, 2019

Sorry for necro bumping.

@nicoddemus

A config option to change this behavior is perhaps acceptable if more people feel the same way.

I would also like it to be configurable. Should I open a new issue?

I've read the linked threads and couldn't find whether such a flag was added. I've only found configuration related to treating warnings as errors, and nothing about changing exit codes.
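
For reference, a commonly shared workaround is a conftest.py hook that rewrites the exit status; this is a sketch, not an official option, and it assumes the session's exit status is read after pytest_sessionfinish runs (pytest.ExitCode is available in pytest 5.0+):

```python
# conftest.py -- sketch of a workaround: report success even when no tests
# were collected. Not an official option; it relies on pytest reading
# session.exitstatus after this hook has run.
import pytest


def pytest_sessionfinish(session, exitstatus):
    if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
        session.exitstatus = pytest.ExitCode.OK
```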

@nicoddemus
Member

Sorry for necro bumping.

Not at all!

Please feel free to open another issue with updated requirements, given that pytest has changed a bit since this issue was closed. 👍

alterapars pushed a commit to alterapars/drought_classification that referenced this issue Sep 19, 2021
Also introduces the "slow" and "authentication" pytest marks to skip tests which are slow or require authentication. Due to pytest-dev/pytest#812 it also adds a dummy test to ensure a 0 exit status when no tests are run.
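
The "dummy test" mentioned in that commit can be as small as a single always-passing test (a sketch; the file and function names are arbitrary):

```python
# test_placeholder.py -- sketch of the dummy-test workaround: one test that
# always passes, so the run exits 0 even before any real tests exist.
def test_placeholder():
    assert True
```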
@Marc--Olivier

Marc--Olivier commented Feb 22, 2024

For information, I ran into this issue today while running pytest --cov=. on a directory that had no tests, in order to collect the code coverage. I was definitely not expecting the command to fail, especially because other tools like unittest don't fail. (see #2393 (comment))

I personally disagree with this decision of returning an error code when no test is executed because it makes it harder to explain when pytest succeeds:

  • before: pytest succeeds if no test fails
  • now: pytest succeeds if no test fails and at least one test is executed.

I have tried to understand the rationale of this decision by reading this ticket and #500. As far as I understand, the only reason is that making pytest return an error helps users detect when they make mistakes in the test filters of their CI pipelines. To me, this looks potentially harmful, because users may rely on this error code to make sure their CI pipeline is correctly implemented instead of looking at the test log. In particular, some tests could run (so pytest does not return an error code) while others are unintentionally skipped (because of an incorrect filter), and we would still need to verify that by looking at the test log (or writing a script that does it for us).

Please don't hesitate to let me know if I misunderstood the rationale or am missing other valid use cases.

Many thanks in advance for taking into consideration this code coverage use case and also the definition of what it means for pytest to run successfully.

Best regards,

Marc-Olivier

@The-Compiler
Member

I don't follow as far as your use-case goes - what's the point of "collecting the code coverage" of nothing?

@Marc--Olivier

I don't follow as far as your use-case goes - what's the point of "collecting the code coverage" of nothing?

Thanks for the question! 👍

I am using a monorepo workflow and need the aggregated code coverage of all the projects within this Git repo. So I need to collect the code coverage of this directory without tests as well, so that the lack of tests is reflected in the aggregated coverage. Does it make more sense now?

I also don't think I would collect the code coverage of nothing if I were not in a monorepo. And I would understand if pytest did not aim to support monorepo workflows.

However, I still think we should have a way to let pytest return 0 when there are no tests, because of the complexity added to the definition of pytest success (it also opens the door to other future corner cases where people may request an error code) and for consistency with (all?) other testing tools.
