pytest.skip option to allow to work at module level #2805

Closed
nicoddemus opened this issue Sep 28, 2017 · 7 comments
Labels
good first issue (easy issue that is friendly to new contributors) · topic: selection (related to test selection from the command line) · type: enhancement (new feature or API change, should be merged into features branch)

Comments

@nicoddemus (Member)

As discussed in #2338

@The-Compiler wrote:

The Skipped exception already has allow_module_level=True as an argument, which is used by pytest.importorskip internally - so you should already be able to do raise pytest.skip.Exception("...", allow_module_level=True).

I think we should just expose that via pytest.skip(...) too.
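For illustration, the existing workaround in a test module might look like this (a sketch; the condition and message are made up, but the raise line is exactly the one quoted above):

# sketch: skip collection of this whole module on Python 2
import sys
import pytest

if sys.version_info < (3,):
    # Raising Skipped directly (via its pytest.skip.Exception alias)
    # with allow_module_level=True skips the rest of this module
    # instead of producing a collection error.
    raise pytest.skip.Exception("requires Python 3", allow_module_level=True)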

@nicoddemus added the Hacktoberfest, good first issue, topic: selection, and type: enhancement labels on Sep 28, 2017
@georgeyk (Contributor) commented Oct 1, 2017

@nicoddemus Hi, can I try to fix this?

I'm not familiar with the codebase, but it looks like the fix involves the following:

  • update this function to add allow_module_level:

    def skip(msg=""):
        """ skip an executing test with the given message. Note: it's usually
        better to use the pytest.mark.skipif marker to declare a test to be
        skipped under certain conditions like mismatching platforms or
        dependencies. See the pytest_skipping plugin for details.
        """
        __tracebackhide__ = True
        raise Skipped(msg=msg)
  • and here too:

    pytest/_pytest/skipping.py, lines 156 to 177 in 0a15edd:

    def pytest_runtest_setup(item):
        # Check if skip or skipif are specified as pytest marks
        skipif_info = item.keywords.get('skipif')
        if isinstance(skipif_info, (MarkInfo, MarkDecorator)):
            eval_skipif = MarkEvaluator(item, 'skipif')
            if eval_skipif.istrue():
                item._evalskip = eval_skipif
                skip(eval_skipif.getexplanation())
        skip_info = item.keywords.get('skip')
        if isinstance(skip_info, (MarkInfo, MarkDecorator)):
            item._evalskip = True
            if 'reason' in skip_info.kwargs:
                skip(skip_info.kwargs['reason'])
            elif skip_info.args:
                skip(skip_info.args[0])
            else:
                skip("unconditional skip")
        item._evalxfail = MarkEvaluator(item, 'xfail')
        check_xfail_no_run(item)
  • tests and docs

Is that correct?

@nicoddemus (Member, Author)

Hi @georgeyk actually it is simpler. 😁

pytest.skip raises Skipped, and that class already supports the flag we want:

class Skipped(OutcomeException):
    # XXX hackish: on 3k we fake to live in the builtins
    # in order to have Skipped exception printing shorter/nicer
    __module__ = 'builtins'

    def __init__(self, msg=None, pytrace=True, allow_module_level=False):
        OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
        self.allow_module_level = allow_module_level

So it is just a matter of changing pytest.skip to accept a keyword argument allow_module_level and forward that to Skipped.

I would like allow_module_level to be keyword-only, but given that Python 2 doesn't support that syntax, I would code pytest.skip like this:

def skip(msg="", **kwargs):
    """ ...
    :kwarg bool allow_module_level: allows this function to be called at the
        module level, skipping the rest of the module. Defaults to False.
    """
    __tracebackhide__ = True
    allow_module_level = kwargs.pop('allow_module_level', False)
    if kwargs:
        raise TypeError('unexpected keyword arguments: {}'.format(sorted(kwargs)))
    raise Skipped(msg=msg, allow_module_level=allow_module_level)
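With that change, a test module could skip itself during collection, something like this (a hypothetical example module; the file name, condition, and test are made up):

# test_windows_only.py - hypothetical module-level skip
import sys
import pytest

if not sys.platform.startswith('win'):
    # With allow_module_level=True this skips collection of the rest
    # of the module rather than treating the call as an in-test skip.
    pytest.skip('Windows-only tests', allow_module_level=True)

def test_registry():
    assert True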

And then a test and docs. 👍

@georgeyk mentioned this issue Oct 1, 2017
@bilderbuchi (Contributor)

@nicoddemus the related PR for this issue is merged.
I could not find a policy on when to close issues, i.e. whether to close when the fix is merged into the repo or when it is released. It would probably make sense to document that in the contributing docs.
IMHO, the former makes more sense: closed issues are nearly as discoverable as open ones on GitHub, and it keeps your issue list (i.e. the list of things that still need work) a bit cleaner.

@RonnyPfannschmidt (Member)

Thanks for the note - we try to close issues automatically on merge, but sometimes the reference is forgotten.

@bilderbuchi (Contributor)

OK, thanks.

@nicoddemus (Member, Author)

Our policy is to close as soon as the fix is merged, either to master or to features, as you suggest.

What happened here is that the fix was merged into the features branch, and GH only processes closing keywords once commits reach master. Also, sometimes people forget (or don't know) to add "Fix #X" to the commit message, and then we end up forgetting to close the issue manually.
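For reference, a closing keyword in a commit or PR message looks like this (the wording is illustrative):

    Fix #2805: expose allow_module_level through pytest.skip()

GitHub then closes the referenced issue automatically once the commit reaches the default branch.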

@bilderbuchi (Contributor) commented Oct 24, 2017

OK. I've used the GH API to automatically identify a handful of issues (back until ~ issue 2400) that should be closed; I'll @mention you to close them.
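For anyone wanting to run a similar sweep, a rough sketch using GitHub's issue search API (the query qualifiers and use of requests are assumptions, not necessarily the script used here):

import requests

# Find open pytest issues that have a linked pull request; these are
# candidates for closing once that PR is merged. Sketch only.
resp = requests.get(
    'https://api.github.com/search/issues',
    params={'q': 'repo:pytest-dev/pytest is:issue is:open linked:pr'},
)
resp.raise_for_status()
for item in resp.json()['items']:
    print('#{}: {}'.format(item['number'], item['title']))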
