
Test with parametrize items; how to repeat but launch fixture only between batches of tests? #48


Closed
ggrelet opened this issue Sep 22, 2020 · 15 comments


@ggrelet

ggrelet commented Sep 22, 2020

I have a simple test with parametrize and fixture:

@pytest.mark.repeat(3)
@pytest.mark.parametrize("case", ["a", "b", "c"])
def test_1(case, my_fixture):
    print("test_1 case: {}".format(case))

@pytest.fixture(scope="?")
def my_fixture():
    yield
    print("my_fixture")  # executed at teardown, after the test

Which pytest-repeat scope and pytest scope should I use if I want the following output?

test_1 case: a
test_1 case: b
test_1 case: c
my_fixture
test_1 case: a
test_1 case: b
test_1 case: c
my_fixture
test_1 case: a
test_1 case: b
test_1 case: c
my_fixture

So far I managed to have the following:

1- With pytest-repeat's scope="session" and pytest's scope="session"

test_1 case: a
test_1 case: b
test_1 case: c
test_1 case: a
test_1 case: b
test_1 case: c
test_1 case: a
test_1 case: b
test_1 case: c
my_fixture

2- With pytest-repeat's scope="session" and pytest's scope="function"

test_1 case: a
my_fixture
test_1 case: b
my_fixture
test_1 case: c
my_fixture
test_1 case: a
my_fixture
test_1 case: b
my_fixture
test_1 case: c
my_fixture
test_1 case: a
my_fixture
test_1 case: b
my_fixture
test_1 case: c
my_fixture

Thanks for your help.

@RonnyPfannschmidt
Member

That's currently not possible

@ggrelet
Author

ggrelet commented Sep 22, 2020

Thank you for your prompt answer. I'll leave you to decide whether to close this issue or leave it open.

@cladmi

cladmi commented Dec 9, 2022

Digging this one out.

A way to trigger this, I think, would be to add a __pytest_repeat_step_number argument to my_fixture. That way, each time the step number changes, the fixture would be re-created too.

This could be provided as a fixture without the __ name prefix to make it an official API.
It would also answer the recurring question of "how can I get the iteration number?"
=> Just request the fixture.

Regarding the plugin implementation, there may be a way to add an argument to a fixture from the outside, the same way it is done for test functions via metafunc.fixturenames.append.

It could be interesting to have this for cases where my_fixture has side effects that need to be repeated.

@gogobera

gogobera commented Dec 18, 2022

I have a similar request to @ggrelet's. I have a time-expensive configuration procedure for my system under test, and it seemed appropriate to execute this configuration in a session-scoped fixture, say

@pytest.fixture(scope="session")
def expensive_configuration():
    important_results = do_the_things()
    return important_results

This is exactly what I think @cladmi is thinking about when they say "cases where my_fixture has side-effects that need to be repeated."

The fixture works well for a single pass through the tests: any subset of tests can be run in any order, and since they all declare expensive_configuration as a requirement (i.e., def test_one_of_many(expensive_configuration)), the configuration procedure executes once before the first test that needs it, and the collected results are available to every test that runs afterwards, without re-executing the configuration.

However, in trying to use pytest-repeat, the issue becomes that any tests that rely on information recorded during the configuration would be run against that same recorded information, unless I can re-execute the fixture code.

Trying @cladmi's suggestion above seemed promising; however, it results (unsurprisingly, in hindsight) in an error:
"ScopeMismatch: You tried to access the 'function' scoped fixture '__pytest_repeat_step_number' with a 'session' scoped request object" from code like

@pytest.fixture(scope="session")
def expensive_configuration(__pytest_repeat_step_number):
    important_results = do_the_things()
    return important_results

and, of course, it always feels wrong (one might say, is wrong?) to leverage double-underscored objects.

Then again, I might be misinterpreting the suggestion or overlooking something else obvious. Please correct me if I've missed something!

I somewhat wonder if the idea is at odds with the way pytest-repeat generates repeated tests. It seems like the suite really has no idea that it's performing repeated tests, only that there is a whole bunch of tests now, with unique names, that all need to be executed. I'm no expert here, though, and I'd love to discover or develop a solution to this problem. I will post back here if I think of anything.

@cladmi

cladmi commented Dec 18, 2022

My bad, you are correct: since the __pytest_repeat_step_number fixture does not set any scope in its declaration, it defaults to function scope.

One solution, with a code change in the plugin, would be to declare it with a dynamic scope.
https://docs.pytest.org/en/7.1.x/how-to/fixtures.html#dynamic-scope

def _scope(fixture_name, config):
    return config.option.repeat_scope


@pytest.fixture(scope=_scope)
def __pytest_repeat_step_number(request):
    ...  # existing fixture body unchanged

Then __pytest_repeat_step_number can be used as a dependency of the session-scoped fixture, and it works for me as expected.

Regarding the current implementation, the fixture is enabled as if it were declared autouse=True, but indirectly, as a side effect of parametrization: c74dd59

It is still the main way of duplicating tests; it acts like a global pytest.mark.parametrize that creates each iteration.

@gogobera

gogobera commented Dec 18, 2022

I have a work-around in mind. It suits my needs a bit better than, perhaps, the needs of the OP, because I care about triggering when a test requires a new configuration, while they seem to be looking for cleanup once a particular parameterization is complete. Without a more specific description of the problem I can't give a better answer, but I think this technique is fairly flexible (or a janky hack? (-: ) and could probably be tweaked to satisfy the OP's situation, even if it isn't the cleanest.

With this code

import pytest 

@pytest.mark.repeat(3)
@pytest.mark.parametrize("case", ["a","b","c"])
def test_1(case, my_fixture):
    print("test_1 case: {}".format(case))

@pytest.fixture(scope="function")
def my_fixture():
    yield # Executed at the end of the test
    print("\nmy_fixture")

I was able to essentially recreate the behavior described by the OP.

With this code, I was able to mock the desired behavior.

# scope_issues/scope_test.py
import pytest 

@pytest.mark.repeat(3)
@pytest.mark.parametrize("case", ["a","b","c"])
def test_1(case, my_fixture):
    print("test_1 case: {}".format(case))

seen_counts = set()

@pytest.fixture(scope="function")
def my_fixture(request, final_cleanup):
    global seen_counts
    test_count = count_from_name(request)
    if not seen_counts or test_count in seen_counts:
        seen_counts.add(test_count)
        # mocking "session" scope by bailing at this point
        # Either the first round or a repeat, so we do nothing.
        return
    else:
        # This is the first of a new parameterization, and
        # we should clean up from the last one.
        seen_counts.add(test_count)
        important_work()


@pytest.fixture(scope="session")
def final_cleanup():
    yield
    important_work()


def important_work():
    print("\nmy_fixture")


def count_from_name(request) -> str:
    test_name = request.node.name
    test_param_suffix = test_name.split("[")[-1].strip("]")
    test_count = "-".join(test_param_suffix.split("-")[-2:])
    # Test count is something like "3-5"
    return test_count
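To sanity-check the name parsing in isolation, here is a standalone rework of count_from_name (the helper name parse_count is mine) that takes the node name directly instead of a request object:

```python
def parse_count(test_name: str) -> str:
    """Extract the repeat counter from a test id, e.g. 'test_1[a-2-3]' -> '2-3'."""
    test_param_suffix = test_name.split("[")[-1].strip("]")  # e.g. 'a-2-3'
    return "-".join(test_param_suffix.split("-")[-2:])       # e.g. '2-3'
```

With --repeat-scope=session and repeat(3), the suffix looks like a-2-3, so the extracted counter (2-3) changes exactly when a new repetition begins, which is what the fixture keys on.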

Thoughts?

@gogobera

Output from hacky code:

$ python3 -m pytest -s scope_issues/scope_test.py --repeat-scope="session"
==== test session starts ====
platform linux -- Python 3.7.10, pytest-6.2.3, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: <my root dir>, configfile: pytest.ini
plugins: repeat-0.9.1
collected 9 items                                                                                                                                                                                                                               

scope_issues/scope_test.py::test_1[a-1-3] test_1 case: a
PASSED
scope_issues/scope_test.py::test_1[b-1-3] test_1 case: b
PASSED
scope_issues/scope_test.py::test_1[c-1-3] test_1 case: c
PASSED
scope_issues/scope_test.py::test_1[a-2-3] 
my_fixture
test_1 case: a
PASSED
scope_issues/scope_test.py::test_1[b-2-3] test_1 case: b
PASSED
scope_issues/scope_test.py::test_1[c-2-3] test_1 case: c
PASSED
scope_issues/scope_test.py::test_1[a-3-3] 
my_fixture
test_1 case: a
PASSED
scope_issues/scope_test.py::test_1[b-3-3] test_1 case: b
PASSED
scope_issues/scope_test.py::test_1[c-3-3] test_1 case: c
PASSED
my_fixture


===== 9 passed in 0.04s ====

@gogobera

gogobera commented Dec 18, 2022

I do like @cladmi's idea of scoping __pytest_repeat_step_number better than what I've done. I'm not sure there's any reason to not adopt that change in the code.

I don't think I, personally, will be able to use any plugin code change for my project, and I'm relatively happy with how I can flexibly handle things this way.

(The test fixture could be tightened:)

@pytest.fixture(scope="function")
def my_fixture(request, final_cleanup):
    global seen_counts
    test_count = count_from_name(request)
    if seen_counts and test_count not in seen_counts:
        # We've seen test code, but this is new test code,
        # and we need to clean up before it executes.
        important_work()
    seen_counts.add(test_count)

@okken
Contributor

okken commented Oct 6, 2023

I believe this would be solved by a currently missing pytest fixture scope, namely a "parametrize" scope.

@okken
Contributor

okken commented Dec 27, 2023

This really requires a change to pytest, so I'm closing the issue.

@okken okken closed this as completed Dec 27, 2023
@RonnyPfannschmidt
Member

@okken the function definition scope already exists; the refactoring to make it part of the collection tree is missing.

@okken
Contributor

okken commented Dec 27, 2023

@RonnyPfannschmidt That's cool. Is it planned to be added to the collection tree anytime soon-ish, like sometime in 2024? Or is it a "maybe someday"?

@RonnyPfannschmidt
Member

Currently it's a "maybe sometime"; I hope we can create a working version at the sprint.

Unfortunately Function is quite the spaghetti monster; once the constructor refactoring is complete it should be simpler.

@RonnyPfannschmidt
Member

I was hoping to run some experiments this week, but I'm completely down with a bad cold/flu and chills.

@okken
Contributor

okken commented Dec 27, 2023

Well, I'm glad it's on the radar. Also, sorry you're feeling crummy.
I caught a mild something, but I'm already on the rebound.
I did get a chance over the holidays to try adding xfail tracebacks and xpass output to -rXx (pytest #11735).
I want to try to merge it with pytest #11574 so sturmf gets some contribution credit, but I'm definitely not a GitHub PR-juggling expert.
