
non-parameterized fixtures not torn down after last use #393

Closed · pytestbot opened this issue Nov 23, 2013 · 8 comments

Labels: topic: fixtures (anything involving fixtures directly or indirectly) · type: enhancement (new feature or API change, should be merged into features branch)

Comments


Originally reported by: Laurence Rowe (BitBucket: lrowe, GitHub: lrowe)


The documentation at http://pytest.org/latest/fixture.html#automatic-grouping-of-tests-by-fixture-instances mentions that:

pytest minimizes the number of active fixtures during test runs. If you have a parametrized fixture, then all the tests using it will first execute with one instance and then finalizers are called before the next fixture instance is created. Among other things, this eases testing of applications which create and use global state.

However, standard (non-parameterized) fixtures are not torn down after their last use; they persist until the end of the session/module, which prevents one from writing alternative fixtures that set up different global states for use in different tests.

It would be great if there were a way to mark certain fixtures as being mutually exclusive (i.e. setting up the same global state object) so that alternative global states could be set up for different tests.
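
[Editor's note: for contrast, a minimal sketch of the grouping the quoted documentation describes, using a parametrized fixture. The names are illustrative, and this uses the modern @pytest.fixture yield style rather than the yield_fixture API of the era.]

import pytest

# Each parametrized instance is finalized before the next is created,
# so only one dataset is ever active at a time.
@pytest.fixture(scope='session', params=['one', 'two'])
def dataset(request):
    print('setup %s' % request.param)
    yield request.param
    print('teardown %s' % request.param)

def test_x(dataset):
    print('test_x using %s' % dataset)

def test_y(dataset):
    print('test_y using %s' % dataset)

# Expected output with -s -q (roughly):
#   setup one / test_x / test_y / teardown one /
#   setup two / test_x / test_y / teardown two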



Original comment by Laurence Rowe (BitBucket: lrowe, GitHub: lrowe):


Another way of solving this might be to have a way to specify that a test function only wants to run for particular parametrized fixture instances, though that seems potentially messy.


Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


Could you provide an example documenting the current behaviour and the behaviour you'd like to see instead?


Original comment by Laurence Rowe (BitBucket: lrowe, GitHub: lrowe):


Here's an example demonstrating the current behaviour. The fixtures data1 and data2 could be anything that sets some global state, for instance loading different test datasets into a connection fixture. They're marked with scope='session' because they could take a while to set up; doing so per test would take a long time.

conftest.py:

import pytest

@pytest.yield_fixture(scope='session')
def data1():
    print('setup data1')
    yield
    print('teardown data1')

@pytest.yield_fixture(scope='session')
def data2():
    print('setup data2')
    yield
    print('teardown data2')

test_a.py:

def test_a1(data1):
    print('test_a1 using data1')

def test_a2(data2):
    print('test_a2 using data2')

test_b.py:

def test_b1(data1):
    print('test_b1 using data1')

def test_b2(data2):
    print('test_b2 using data2')

output of $ bin/py.test tests/ -s -q:

setup data1
test_a1 using data1
.setup data2
test_a2 using data2
.test_b1 using data1
.test_b2 using data2
.teardown data2
teardown data1

What I'd like to see instead is:

setup data1
test_a1 using data1
.test_b1 using data1
.teardown data1
setup data2
test_a2 using data2
.test_b2 using data2
.teardown data2

This time only the fixtures required by the current test are active, and the tests have been re-ordered and grouped by fixture usage to reduce the number of times fixtures need to be set up.

(This would be equivalent to the behaviour of zope.testrunner's layers: https://pypi.python.org/pypi/zope.testrunner#layers)

It might be too much of a change for other users to make active fixture minimisation the default (I would have found it less surprising, but then I was used to the zope.testrunner behaviour before). Perhaps the ability to mark certain fixtures as being impure or affecting particular global state:

import pytest

@pytest.yield_fixture(scope='session', global_state=['db.Session'])
def data1():
    print('setup data1')
    yield
    print('teardown data1')

@pytest.yield_fixture(scope='session', global_state=['db.Session'])
def data2():
    print('setup data2')
    yield
    print('teardown data2')

That way no two fixtures marked as affecting the same global state would be active concurrently.
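
[Editor's note: with pytest as it exists today, this mutual exclusion can be approximated by folding the alternative states into a single parametrized fixture and letting each test pick the instance it needs via indirect parametrization. A sketch, reusing the names from the example above:]

# conftest.py
import pytest

@pytest.fixture(scope='session')
def data(request):
    # request.param is supplied by each test's parametrize mark below
    print('setup %s' % request.param)
    yield request.param
    print('teardown %s' % request.param)

# test_a.py
import pytest

@pytest.mark.parametrize('data', ['data1'], indirect=True)
def test_a1(data):
    print('test_a1 using %s' % data)

@pytest.mark.parametrize('data', ['data2'], indirect=True)
def test_a2(data):
    print('test_a2 using %s' % data)

[Since the two states are now instances of one fixture, only one can be active at a time, and pytest's grouping of parametrized instances should produce approximately the desired output shown above.]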


Original comment by holger krekel (BitBucket: hpk42, GitHub: hpk42):


Thanks for the example! I think you could also mark "data1" and "data2" as never being "active" at the same time. This would also make it an error to try to use data1 and data2 in the same test. I guess your notation achieves this more indirectly.

I am not sure ATM how easy this is to implement, to be honest.


Original comment by BitBucket: vaab, GitHub: vaab:


Hi, I'm also coming from zope.testrunner and having the same issue. I can give a real-world scenario:

I have an application that supports addons.
Tests can require two fixtures: that the application is launched (fixture 1) and that the addon is installed in the application (a per-addon ad hoc fixture).

These tests are not located anywhere specific; for instance, one addon could add some tests to another addon. They bear no relation to Python packages or directories, so tests requiring the same addon fixture could be anywhere in pytest's collected tests.

Following the documentation, I would expect:


setup 'application installed' fixture
  setup 'addon A installed' fixture
     TEST all tests that depends on fixture A  
  teardown 'addon A installed' fixture
  setup 'addon B installed' fixture
     TEST all tests that depends on fixture B  
  teardown 'addon B installed' fixture
teardown 'application installed' fixture

But this is what we currently get:

setup 'application installed' fixture
  setup 'addon A installed' fixture
     TEST all tests that depends on fixture A  
     setup 'addon B installed' fixture
        TEST all tests that depends on fixture B  
     teardown 'addon B installed' fixture
  teardown 'addon A installed' fixture
teardown 'application installed' fixture

In my scenario, tests get an environment they didn't ask for: for instance, the tests on addon B will have the 'addon A installed' fixture active even though they didn't request it. Whether it is active changes with which tests I select to run and with the order of the tests. Moreover, I might genuinely want to run some tests with both addon fixtures, and that case should definitely behave differently.
Not to mention when 'fixture addon A' and 'fixture addon B' are incompatible, which is often the case in the given scenario!

In the current implementation you don't guarantee that only the fixtures a test asked for are actually set up before it runs; other, unrelated fixtures may or may not be set up depending on test order, test selection... Isn't that a flaw?

Why not simply tear down a fixture once there are no more tests that will use it?
Do you have scenarios where the current behaviour makes sense?

I don't think we need to "mark" incompatible fixtures. Just ensuring that a test gets only the fixtures it asked for seems saner to me. I'm probably missing some obvious reason why this wasn't already implemented in pytest, and I apologise in advance for my blindness. I would be very happy to get the right pointers to understand the current choices on this topic. Thanks!
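
[Editor's note: a sketch of how this addon scenario maps onto the same single-parametrized-fixture approach, assuming each test names the addon it needs via indirect parametrization as in the earlier sketch; all names here are hypothetical.]

# conftest.py
import pytest

@pytest.fixture(scope='session')
def application():
    print("setup 'application installed' fixture")
    yield
    print("teardown 'application installed' fixture")

@pytest.fixture(scope='session')
def addon(application, request):
    # request.param names the addon a test asked for via, e.g.,
    # @pytest.mark.parametrize('addon', ['A'], indirect=True)
    print("setup 'addon %s installed' fixture" % request.param)
    yield request.param
    print("teardown 'addon %s installed' fixture" % request.param)

[Tests asking for addon A should then be grouped together, with addon A torn down before addon B is set up and the application fixture staying up throughout, approximating the expected nesting above.]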

pytestbot added the "type: enhancement" label on Jun 15, 2015

oscarh commented Nov 9, 2016

Hi, I think this is related to #687, and I'm also interested in having fixtures (at least module-scoped ones) torn down when they're no longer needed.

fkohlgrueber pushed a commit to fkohlgrueber/pytest that referenced this issue Oct 27, 2018

tolomea commented Jun 3, 2019

This behaviour also surprised me, as I'd hoped to use it to address out-of-memory issues.

We have some tests across multiple modules that require pyspark, which spins up an entire Java VM. We also have some other, unrelated tests that are memory-heavy.
Both sets of tests run fine independently.
But if you run the whole suite, the pyspark tests run first and the Java VM hangs around until the end of the session.
This results in the later memory-heavy tests hitting out-of-memory on the CI instance.

I had thought from "pytest minimizes the number of active fixtures during test runs" in the documentation that a session fixture might fix this.
I imagined a fixture that would be used by all the pyspark tests and shut down pyspark on exit.
The hope was that those tests would be grouped together and the pyspark Java instance would be shut down immediately after they were done.
Unfortunately that doesn't seem to work.

I could address this with a module-scoped fixture, but then I'd be restarting that Java VM for each module that does spark stuff, which is not ideal.
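
[Editor's note: one workaround that sidesteps fixture teardown entirely is to reorder the collected tests so the pyspark ones run last; the JVM then only exists after the memory-heavy tests have finished. A sketch, assuming the spark-dependent tests carry a hypothetical @pytest.mark.spark marker registered in the project's config:]

# conftest.py
def pytest_collection_modifyitems(config, items):
    # Stable partition: spark-marked tests move to the end of the run,
    # preserving relative order within each group.
    spark = [item for item in items if item.get_closest_marker('spark')]
    other = [item for item in items if not item.get_closest_marker('spark')]
    items[:] = other + spark

[The JVM still lives until the end of the session, but only after the memory-heavy tests are done.]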

Zac-HD added the "topic: fixtures" label on Jul 3, 2019
RonnyPfannschmidt (Member) commented:

Closing this for now: the design of the current scoping system makes it impossible to do this safely, and the major refactor that would be needed hasn't manifested in the last decade.

RonnyPfannschmidt closed this as not planned on May 12, 2023