disallow test skipping #1364
Comments
There is nothing builtin; you could replace importorskip with something of your own. |
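A minimal sketch of such a home-grown replacement, assuming a made-up PYTEST_DISALLOW_SKIP environment variable (not a pytest feature) to toggle failing versus skipping:

import importlib
import os

import pytest


def import_or_fail(name):
    # Hypothetical helper: behaves like pytest.importorskip, except that it
    # fails instead of skipping when PYTEST_DISALLOW_SKIP is set.
    # Intended to be called from inside a test or fixture (a module-level
    # pytest.skip would need allow_module_level=True).
    try:
        return importlib.import_module(name)
    except ImportError:
        message = 'required module not installed: %s' % name
        if os.environ.get('PYTEST_DISALLOW_SKIP'):
            pytest.fail(message)
        pytest.skip(message)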
Could you explain what you mean by replacing importorskip? I tried something like this in my conftest, where I was expecting to check if the test was skipped and add a failure if it really was skipped - but it didn't really work (no idea why). |
If you want the test to fail if a dependency is not installed then you shouldn't be using skip, IMHO. Skip is meant to be used when it is OK for a test to be skipped due to expected constraints (such as a Windows test running on Linux). If you want the test to fail, simply use |
FWIW you could probably change the test outcome after the fact, similar to what I do in pytest-vw. Not that I'd recommend it (see what @nicoddemus said), but I'm guilty of writing that plugin anyway 😆 |
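A rough sketch of that change-the-outcome-after-the-fact idea (not necessarily how pytest-vw does it), as a hookwrapper in conftest.py:

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Let the normal report be built first, then rewrite skips as failures.
    outcome = yield
    report = outcome.get_result()
    if report.skipped:
        report.outcome = 'failed'
        report.longrepr = 'skipping is disallowed: %r' % (report.longrepr,)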
@nicoddemus I seem to need this again, so I'm restarting this discussion. In my CI, I want to ensure that none of my tests are skipped, because that would mean I didn't set something up correctly. A while back, Travis had a bug (travis-ci/travis-ci#5405) and because of that none of the I understand that this can be avoided by checking all the dependencies before running pytest, but I think doing this in pytest would be much easier. So, ideally I would like to mention which tests (by name, or maybe how many tests) I wouldn't mind being skipped in my Travis setup - would that be possible with a conftest? |
Probably yes. How are you marking tests that should be skipped locally if a dependency is missing, but should fail on CI in the same situation? For illustration, I would adopt an explicit mark for that purpose and handle it in an autouse fixture:

# test file
import pytest


@pytest.mark.check_dep('pillow')
def test_image_blur():
    ...


# conftest.py
import os

import pytest


@pytest.fixture(autouse=True)
def handle_check_dep_markers(request):
    # request.node is the running test item; look up the check_dep mark on it.
    m = request.node.get_closest_marker('check_dep')
    if m:
        module_name = m.args[0]
        try:
            __import__(module_name)
            available = True
        except ImportError:
            available = False
        # Heuristic: Jenkins sets JENKINS_URL in the environment of CI runs.
        running_on_ci = 'JENKINS_URL' in os.environ
        if not available:
            message = 'Missing required module: %s' % module_name
            if running_on_ci:
                pytest.fail(message)
            else:
                pytest.skip(message)

(Note: untested, just giving the general idea.) Such a mark would skip the test locally, but fail when running in the CI. |
@nicoddemus Thanks for that! Sadly, it may not suit my needs. I think I should be able to override the unittest decorators to become a pytest fixture if the tests are being run by pytest (I'm guessing there would be some way to detect this in the code), but it's a little hacky. Would it be possible to do a similar thing with the |
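One possible sketch of the "detect whether the tests are being run by pytest" part; the helper name is just for illustration:

import os
import sys


def running_under_pytest():
    # PYTEST_CURRENT_TEST is set by pytest while a test runs; the sys.modules
    # check also covers the import/collection phase.
    return 'PYTEST_CURRENT_TEST' in os.environ or 'pytest' in sys.modules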
Oh OK, that brings more light to the subject, thanks! I'm pretty sure ... Other than that, pytest doesn't really know about |
coala (the org which needed this) has found a hackish way to achieve this, by converting skips to errors with https://pypi.python.org/pypi/pytest-error-for-skips, and also by reaching 100% coverage and enforcing it with pytest-cov, which is another way to indirectly catch skips, as they usually result in code not being reached. |
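For the coverage half of that approach, pytest-cov's --cov-fail-under option can enforce the threshold (the package name below is a placeholder):

pytest --cov=mypackage --cov-fail-under=100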
@jayvdb thanks for sharing that! 👍 |
You can combine pytest.xfail("module unavailable") with pytest.fail("module unavailable"): under --runxfail, pytest.xfail becomes a no-op, so execution falls through to pytest.fail. You can run it locally with:

PS C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2> pytest
================================================================================ test session starts =================================================================================
platform win32 -- Python 3.12.4, pytest-8.3.2, pluggy-1.5.0
rootdir: C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2
collected 437 items
alpha\src\jsonyx\test\test_jsonyx.py x..........................x.x.x.x. [ 8%]
alpha\src\jsonyx\test\test_loads.py xxx...xxx...x.xxx...xxx...xxx...xxx...xxxxxxxxxxxx............xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..............................xx..xx..xx..xxxxx [ 39%]
xxxxxxxxxxxxxxxxxxxxxx...........................xxxxxxxxxxxxxxx...............xxx...xxxxxxxxxx..........xxxxx.....xxxxx.....xxxxxx......xxxxxxxxxx..........xxxxxxx.......xxxx [ 79%]
xxx.......xxxxxxxxxx..........x.x.xx..xx..xx..xx..xxx...xxxxx.....xxxxxx......x.xxx...x. [100%]
========================================================================== 231 passed, 206 xfailed in 4.69s ==========================================================================

And with --runxfail:

PS C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2> pytest -x --runxfail
================================================================================ test session starts =================================================================================
platform win32 -- Python 3.12.4, pytest-8.3.2, pluggy-1.5.0
rootdir: C:\Users\wanne\OneDrive\Personal\GitHub\pyvz2
collected 437 items
alpha\src\jsonyx\test\test_jsonyx.py E
======================================================================================= ERRORS =======================================================================================
____________________________________________________________________ ERROR at setup of test_duplicate_key[cjson] _____________________________________________________________________
request = <SubRequest 'json' for <Function test_duplicate_key[cjson]>>
@pytest.fixture(params=[cjson, pyjson], ids=["cjson", "pyjson"], name="json")
def get_json(request: pytest.FixtureRequest) -> ModuleType:
"""Get JSON module."""
json: ModuleType | None = request.param
if json is None:
pytest.xfail("module unavailable")
> pytest.fail("module unavailable")
E Failed: module unavailable
alpha\src\jsonyx\test\__init__.py:30: Failed
============================================================================== short test summary info ===============================================================================
ERROR alpha/src/jsonyx/test/test_jsonyx.py::test_duplicate_key[cjson] - Failed: module unavailable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================================================== 1 error in 0.50s ================================================================================== |
Is there any means that allows me to make pytest fail if any tests are skipped? (We use test skipping mainly for not-installed dependencies.) I couldn't find anything when googling.