disallow test skipping #1364

Closed
sils opened this issue Feb 7, 2016 · 11 comments

Labels
type: question (general question, might be closed after 2 weeks of inactivity)

Comments

@sils

sils commented Feb 7, 2016

Is there any way to make pytest fail if any tests are skipped? (We use test skipping mainly for dependencies that are not installed.) I couldn't find anything when googling.

@RonnyPfannschmidt
Member

There is nothing built-in; you could replace importorskip with something of your own.
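
As a minimal sketch of such a replacement (require_or_fail is a made-up name, not a pytest API; it mirrors pytest.importorskip but fails instead of skipping):

import importlib

import pytest


def require_or_fail(modname):
    """Import modname, failing the current test if it is not installed."""
    try:
        return importlib.import_module(modname)
    except ImportError:
        pytest.fail("required module not installed: %s" % modname)

A test would then call require_or_fail("numpy") where it previously called pytest.importorskip("numpy").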

@RonnyPfannschmidt added the type: question label Feb 10, 2016
@AbdealiLoKo
Contributor

Could you explain what you mean by replacing importorskip?
I cannot find it anywhere in pytest.

I tried something like this in my conftest:

def pytest_runtest_call(item):
    if not isinstance(item, _pytest.doctest.DoctestTextfile):
        evalskip = getattr(item, '_evalskip', None)
        if evalskip is not None and evalskip.istrue():
            item.addFailure(None, "Test was skipped")

I was expecting it to check whether the test was skipped and add a failure if it really was, but it didn't work (no idea why).
My understanding of the pytest framework is limited, so any help would be appreciated.

@nicoddemus
Member

pytest.importorskip('module') will skip the test if module cannot be imported... that's what @RonnyPfannschmidt meant.

If you want the test to fail when a dependency is not installed, then you shouldn't be using skip, IMHO. Skip is meant for cases where it is OK for a test not to run because of an expected constraint (such as a Windows-only test running on Linux). If you want the test to fail, simply use pytest.fail instead of pytest.skip.
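
To make the distinction concrete, a minimal sketch (numpy here is only a stand-in dependency, not something this project requires):

import sys

import pytest


def test_windows_registry():
    # An expected constraint: it is fine to skip on other platforms.
    if sys.platform != "win32":
        pytest.skip("Windows-only behaviour")


def test_requires_numpy():
    # A hard requirement of the suite: fail loudly instead of skipping.
    try:
        import numpy  # noqa: F401
    except ImportError:
        pytest.fail("numpy must be installed to run this test")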

@The-Compiler
Member

FWIW, you could probably change the test outcome after the fact, similar to what I do in pytest-vw. Not that I'd recommend it (see what @nicoddemus said), but I'm guilty of writing that plugin anyway 😆
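
A rough, untested sketch of that approach, loosely in the spirit of what pytest-vw does; this re-labels every non-xfail skip as a failure (put it in conftest.py):

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.skipped and not hasattr(report, "wasxfail"):
        # Turn the yellow "s" into a red "F" so CI goes red.
        report.outcome = "failed"
        report.longrepr = "test was skipped, but skips are not allowed"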

@AbdealiLoKo
Contributor

@nicoddemus I seem to need this again, so I'm restarting this discussion.
The reason I want to fail tests I marked as skipped is CI.

In my CI, I want to ensure that none of my tests are skipped, because a skip means I didn't set something up correctly. A while back, Travis had a bug (travis-ci/travis-ci#5405) and because of it none of the apt packages were installed in some of our jobs :/
pytest was quietly skipping the affected tests and we didn't even know, because we were happy with the green builds.

I understand that this can be avoided by checking all the dependencies before running pytest... but I think doing it in pytest would be much easier.

So, ideally I would like to specify which tests (by name, or maybe how many tests) I wouldn't mind being skipped in my Travis setup - would that be possible with a conftest?

@nicoddemus
Member

would that be possible with a conftest?

Probably, yes. How are you marking tests that should be skipped locally when a dependency is missing, but should fail on CI in the same situation?

For illustration, I would adopt an explicit mark for that purpose and handle it in a conftest.py file:

# test file
import pytest


@pytest.mark.check_dep('pillow')
def test_image_blur():
    ...


# conftest.py
import os

import pytest


@pytest.fixture(autouse=True)
def handle_check_dep_markers(request):
    m = request.node.get_marker('check_dep')
    if m:
        module_name = m.args[0]
        try:
            __import__(module_name)
            available = True
        except ImportError:
            available = False

        running_on_ci = 'JENKINS_URL' in os.environ
        if not available:
            message = 'Missing required module: %s' % module_name
            if running_on_ci:
                pytest.fail(message)
            else:
                pytest.skip(message)

(Note: untested, just giving the general idea)

Such a mark would skip the test locally, but fail when running in CI.

@AbdealiLoKo
Contributor

@nicoddemus Thanks for that! Sadly, it may not suit my needs.
We normally use @unittest.skipIf and @unittest.skipUnless so that our tests stay compatible with unittest and nose as well (some devs like nose, others prefer unittest...).
But other than that, I can tweak your example code to also read a list of acceptable skipped tests (from an env variable or a file) and fail as appropriate. So, I'm impressed by the flexibility that would provide 👍

I think I should be able to override the unittest decorators to become a pytest fixture when the tests are being run by pytest (I'm guessing there would be some way to detect this in the code), but it's a little hacky.

Would it be possible to do a similar thing with the unittest.skipIf decorators? Is there a hook for that which I could use or create?

@nicoddemus
Member

Oh OK, that sheds more light on the subject, thanks!

I'm pretty sure unittest.skipIf decorates the function with some attribute that you can inspect in the fixture declared in conftest.py by looking at request.node.obj (which will be a method, if I'm correct).

Other than that, pytest doesn't really know about the unittest.skipIf decorator.
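
A rough, untested sketch of that inspection. It relies on the __unittest_skip__ / __unittest_skip_why__ attributes that unittest's skip decorators set (an implementation detail of unittest) and on the generic CI environment variable that most CI services export, and it may not fire for classes skipped as a whole:

# conftest.py
import os

import pytest


@pytest.fixture(autouse=True)
def fail_unittest_skips_on_ci(request):
    test_obj = getattr(request.node, 'obj', None)
    if test_obj is not None and getattr(test_obj, '__unittest_skip__', False):
        reason = getattr(test_obj, '__unittest_skip_why__', '')
        if 'CI' in os.environ:
            pytest.fail('skipped test is not allowed on CI: %s' % reason)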

@jayvdb
Contributor

jayvdb commented Jun 6, 2017

coala (the org which needed this) has found a hackish way to achieve it: converting skips to errors with https://pypi.python.org/pypi/pytest-error-for-skips, and also reaching 100% coverage and enforcing it with pytest-cov, which is another way to catch skips indirectly, since they usually result in code not being reached.

@nicoddemus
Member

@jayvdb thanks for sharing that! 👍

@nineteendo

You can combine xfail() with fail(): locally, pytest.xfail() raises and the test is reported as xfailed, while with --runxfail it does nothing, so pytest.fail() is reached and the test fails:

pytest.xfail("module unavailable")
pytest.fail("module unavailable")

You can run it locally with pytest:

PS C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2> pytest   
================================================================================ test session starts =================================================================================
platform win32 -- Python 3.12.4, pytest-8.3.2, pluggy-1.5.0
rootdir: C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2
collected 437 items

alpha\src\jsonyx\test\test_jsonyx.py x..........................x.x.x.x.                                                                                                        [  8%]
alpha\src\jsonyx\test\test_loads.py xxx...xxx...x.xxx...xxx...xxx...xxx...xxxxxxxxxxxx............xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..............................xx..xx..xx..xxxxx [ 39%]
xxxxxxxxxxxxxxxxxxxxxx...........................xxxxxxxxxxxxxxx...............xxx...xxxxxxxxxx..........xxxxx.....xxxxx.....xxxxxx......xxxxxxxxxx..........xxxxxxx.......xxxx [ 79%]
xxx.......xxxxxxxxxx..........x.x.xx..xx..xx..xx..xxx...xxxxx.....xxxxxx......x.xxx...x.                                                                                        [100%]

========================================================================== 231 passed, 206 xfailed in 4.69s ========================================================================== 

And with pytest -x --runxfail on CI:

PS C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2> pytest -x --runxfail
================================================================================ test session starts =================================================================================
platform win32 -- Python 3.12.4, pytest-8.3.2, pluggy-1.5.0
rootdir: C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2
collected 437 items

alpha\src\jsonyx\test\test_jsonyx.py E

======================================================================================= ERRORS ======================================================================================= 
____________________________________________________________________ ERROR at setup of test_duplicate_key[cjson] _____________________________________________________________________

request = <SubRequest 'json' for <Function test_duplicate_key[cjson]>>

    @pytest.fixture(params=[cjson, pyjson], ids=["cjson", "pyjson"], name="json")
    def get_json(request: pytest.FixtureRequest) -> ModuleType:
        """Get JSON module."""
        json: ModuleType | None = request.param
        if json is None:
            pytest.xfail("module unavailable")
>           pytest.fail("module unavailable")
E           Failed: module unavailable

alpha\src\jsonyx\test\__init__.py:30: Failed
============================================================================== short test summary info ===============================================================================
ERROR alpha/src/jsonyx/test/test_jsonyx.py::test_duplicate_key[cjson] - Failed: module unavailable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================================================== 1 error in 0.50s ==================================================================================
