Session teardown not executed when test is skipped in fixture #209

Description

@gopiotr

Hi,
After merging these two PRs: #205, #207, I noticed a bug in how the session teardown is executed in my tests. For certain important reasons I sometimes need to "dynamically" skip tests during the setup phase, inside a test fixture. The recent changes cause that, when such a skip is raised in one fixture, no teardown is applied to another fixture (with session scope). To illustrate the problem, I prepared this simple demo:

import pytest

@pytest.fixture(scope='session')
def first():
    print("\n---session_setup---")
    yield
    print("\n---session_teardown---")

@pytest.fixture(scope='function')
def second():
    pytest.skip('fixture skip')
    yield

def test_a(first, second):
    pass

When I run this test with the command:

pytest -vs

I get the following, expected output (the session teardown is applied correctly):

test_reruns_skip.py::test_a 
---session_setup---
SKIPPED (fixture skip)
---session_teardown---

But when I run the same test with reruns enabled:

pytest -vs --reruns 1

I get the following, incorrect output (the session teardown is not applied):

test_reruns_skip.py::test_a 
---session_setup---
SKIPPED (fixture skip)

Source of problem

After analyzing the recent changes, I discovered that the problem comes from this fragment:

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if call.when == "call":
        item.test_failed = result.failed

The item.test_failed status is saved only when the call phase is executed. In my example this fragment is never reached, so later, in the pytest_runtest_teardown function, the situation is wrongly treated as one that should be rerun, because the item.test_failed attribute is missing:
    # teardown when test not failed or rerun limit exceeded
    if item.execution_count > reruns or getattr(item, "test_failed", None) is False:
        item.teardown()
    else:
        # clean cached results from any level of setups
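To illustrate, with --reruns 1 the condition evaluates roughly like this when the skip is raised during fixture setup (an illustrative trace, not code taken from the plugin):

# execution_count is 1 on the first run, reruns is 1
item.execution_count > reruns                 # 1 > 1 -> False
getattr(item, "test_failed", None)            # None, because the "call" phase never produced a report
getattr(item, "test_failed", None) is False   # None is not False -> False
# Both operands are False, so item.teardown() is not called and the cached
# setup results (including the session-scoped fixture) are cleared instead,
# which is why the session teardown never runs.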

Suggestions for solving the problem

The simplest way to solve the above problem is to set the item.test_failed attribute every time the pytest_runtest_makereport hook is called, like this:

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    item.test_failed = result.failed

But I'm not an expert in the rerunfailures plugin (nor in pytest), so I don't know whether this is a safe solution.

A more complex (but probably safer) solution could look like this:

def pytest_runtest_teardown(item, nextitem):
    ...
    # teardown when test not failed or rerun limit exceeded
    _test_failed_statuses = getattr(item, "_test_failed_statuses", {})
    if item.execution_count > reruns or not any(_test_failed_statuses.values()):
        item.teardown()
    else:
        # clean cached results from any level of setups
        ...

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    if result.when == 'setup':
        # clean failed statuses at the beginning of each test/rerun
        setattr(item, '_test_failed_statuses', {})
    _test_failed_statuses = getattr(item, '_test_failed_statuses', {})
    _test_failed_statuses[result.when] = result.failed
    item._test_failed_statuses = _test_failed_statuses

The problem is that both solutions cause failures in the unit tests, and I have no idea how to solve my problem in a better way, or how to fix those tests.
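For reference, this is roughly how the scenario could be captured as a regression test. This is only a sketch using pytest's pytester fixture (assuming it is enabled, e.g. via pytest_plugins = ["pytester"] in conftest.py); the test name and assertion are mine, not taken from the plugin's test suite:

def test_session_teardown_runs_when_fixture_skips(pytester):
    pytester.makepyfile(
        """
        import pytest

        @pytest.fixture(scope="session")
        def first():
            print("---session_setup---")
            yield
            print("---session_teardown---")

        @pytest.fixture
        def second():
            pytest.skip("fixture skip")

        def test_a(first, second):
            pass
        """
    )
    result = pytester.runpytest("-s", "--reruns", "1")
    # The session-scoped teardown should run even though the test
    # was skipped during fixture setup.
    result.stdout.fnmatch_lines(["*---session_teardown---*"])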

@lukasNebr @icemac could you help me with this problem, or could you propose a better solution?

I'm working with:
Python 3.8.5
pytest-7.2.0
rerunfailures-11.2.dev0
