[FIX] Added cleaning steps for cached fixture result and failed setup states. #66
Conversation
…cover the fixed cases.
LGTM.
@OlegKuzovkov Thank you for your contribution. I added some comments and hope you can address them. Additionally, this should get an entry in the changelog and a short mention in the readme, since it now seems possible to rerun class-scoped fixtures.
pytest_rerunfailures.py
Outdated
    delattr(fixture_def, cached_result)


def _remove_failed_setup_state_from_session(item):
I am a little bit reluctant to introduce cleanup logic in the plugin; this could lead to too much coupling. As the SetupState is not too complicated, it looks like https://github.com/pytest-dev/pytest/blob/master/src/_pytest/runner.py#L331 is actually doing the job already. Would that be a possibility?
If I understand correctly, your advice is to use the SetupState.teardown_all() method instead of the custom cleaner I've implemented. I've tried to use it, but it does not seem to handle the resources properly. All it does is call the teardowns for each colitem in the stack, while missing the most important part: invalidating/removing the failed execution result stored in the _prepare_exc field. That's why every new test relying on setup_class will rerun it anyway, even though it shouldn't, because the setup was already successfully re-executed after the first attempt.
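To make the caching problem concrete, here is a toy model of the behavior described above. This is my own sketch, not pytest's actual SetupState; the `_prepare_exc` attribute mimics the cached-failure field mentioned in this thread, and all other names are illustrative:

```python
class ToySetupState:
    """Toy stand-in for pytest's SetupState (illustrative, not real pytest)."""

    def __init__(self):
        self.stack = []

    def prepare(self, colitem):
        # A previously failed setup is re-raised from cache, not re-run.
        if hasattr(colitem, '_prepare_exc'):
            raise colitem._prepare_exc
        try:
            colitem.setup()
            self.stack.append(colitem)
        except Exception as exc:
            colitem._prepare_exc = exc
            raise

    def teardown_all(self):
        # Tears down everything on the stack, but never touches the cached
        # _prepare_exc -- the invalidation step that is missing.
        while self.stack:
            self.stack.pop().teardown()


class FlakyClass:
    """Setup fails on the first call and succeeds afterwards."""

    def __init__(self):
        self.setup_runs = 0

    def setup(self):
        self.setup_runs += 1
        if self.setup_runs == 1:
            raise AssertionError('first setup attempt fails')

    def teardown(self):
        pass
```

In this model, calling teardown_all() after a failed setup leaves the stale failure in place; only deleting `_prepare_exc` explicitly (the kind of cleanup the plugin performs) lets the setup run again on the next attempt.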
You can see the problem by changing the body of _remove_failed_setup_state_from_session
function to:
setup_state = getattr(item.session, '_setupstate')
setup_state.teardown_all()
And running the test:
from pytest import mark

class TestFoo(object):
    execution_number = 1
    srr = None

    @classmethod
    def setup_class(cls):
        # Random and Log are the author's own test helpers
        cls.srr = Random.rand_char_string(10)
        Log.info('IN SETUP %s' % cls.srr)
        if cls.execution_number:
            cls.execution_number -= 1
            assert False
        assert True

    @mark.parametrize('param', [1, 2, 3])
    def test_pass(self, param):
        Log.info('IN TEST %s %s' % (param, self.srr))
        assert param
The output will be:
R2018-09-14 11:12:27,314 IN SETUP XWnfETJrmi
2018-09-14 11:12:27,568 IN SETUP ArdcndHZkT
.2018-09-14 11:12:27,569 IN TEST 1 ArdcndHZkT
R2018-09-14 11:12:27,617 IN SETUP GbnwDwvSWi
.2018-09-14 11:12:27,618 IN TEST 2 GbnwDwvSWi
R2018-09-14 11:12:27,642 IN SETUP tdWVHVPPyi
.2018-09-14 11:12:27,643 IN TEST 3 tdWVHVPPyi
I do agree with you, though, that instead of cleaning the stack myself, I should have used teardown_all(). With this change the output of the previous test is correct:
R2018-09-14 11:13:36,147 IN SETUP biCSDlQwSP
2018-09-14 11:13:36,303 IN SETUP ZKlZjHlYDB
.2018-09-14 11:13:36,304 IN TEST 1 ZKlZjHlYDB
.2018-09-14 11:13:36,308 IN TEST 2 ZKlZjHlYDB
.2018-09-14 11:13:36,320 IN TEST 3 ZKlZjHlYDB
Further investigation showed that we cannot use .teardown_all(), because it triggers all the finalizers, even for module/session scoped fixtures. Thus, I'm leaving the original implementation.
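A small sketch of the difference (my own illustration, not pytest code): teardown_all() fires every registered finalizer regardless of scope, while a targeted cleanup leaves the broader scopes alone. The scope tags and method names here are assumptions for illustration only:

```python
class ToyFinalizerState:
    """Illustrative only: finalizers tagged with the scope they belong to."""

    def __init__(self):
        self._finalizers = []  # list of (scope, callback), LIFO order

    def addfinalizer(self, scope, callback):
        self._finalizers.append((scope, callback))

    def teardown_all(self):
        # What teardown_all effectively does: run everything, including
        # session/module finalizers we wanted to keep alive.
        while self._finalizers:
            _, callback = self._finalizers.pop()
            callback()

    def teardown_scope(self, scope):
        # Targeted cleanup: only run finalizers of the failing scope.
        kept = []
        while self._finalizers:
            s, callback = self._finalizers.pop()
            if s == scope:
                callback()
            else:
                kept.append((s, callback))
        self._finalizers = list(reversed(kept))
```

Running teardown_scope('class') tears down only class-level resources, which is the behavior the plugin needs to preserve expensive module/session fixtures.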
Yeah, that is true. So I am happy with keeping the cleanup here. At this point a comment could be helpful.
pytest_rerunfailures.py
Outdated
@@ -160,6 +185,10 @@ def pytest_runtest_protocol(item, nextitem):
         # will rerun test, log intermediate result
         item.ihook.pytest_runtest_logreport(report=report)

+        # cleaning item's cached results from any level of setups
+        _remove_cached_results_from_fixtures(item)
+        _remove_failed_setup_state_from_session(item)
What I am not sure about is the effect on the result of a module or session scoped fixture. Could you please add a test for them as well, showing that they are not affected by this change? I guess this should not re-initialize those fixtures, as they could be expensive database fixtures.
This indeed will invalidate all the fixtures' cached_results. At this point I have a question for you: if the test failed, do we want to rerun all fixtures or just the one that failed?
I implemented it this way on purpose because I think that if something goes wrong, it is better to reinitialize all the dependencies. But now I have some doubts about it, because of the reason you've mentioned.
Based on your answer I will change the implementation accordingly.
I would not like to re-initialize all fixtures, but only the one that failed. That should be the case with the current state of the PR.
That is correct. Only the failed fixture will have its previous result erased.
test_pytest_rerunfailures.py
Outdated
    assert_outcomes(result, passed=3, rerun=1)


def test_rerun_on_class_scope_fixture_setup_with_first_execution_error_for_parametrized_test(testdir):
The function names of these tests are quite long. Could you please shorten them and move the more verbose description into a docstring? This reminds me to add a flake8 test to the tox run.
I would really appreciate it if you could suggest names for these cases. I really cannot find any clearer or shorter test names for these specific contexts :)
Thank you for your feedback. I've tried to change/fix everything you've mentioned. If you find anything else, let me know.
This should be fine now. If you find time, you could maybe consider the comments on the tests. As I have a poor connection this week, I plan to merge and release the changes next week.
pytest_rerunfailures.py
Outdated
-        delattr(fixture_def, cached_result)
+        result, cache_key, err = getattr(fixture_def, cached_result)
+        if err:  # Deleting cached results for only failed fixtures
+            delattr(fixture_def, cached_result)
I like the test for an error. Hopefully we will not have a chain of dependent fixtures and strange results, but then the dependent fixture should also produce an error.
I've added the test for the error (test_rerun_on_module_fixture_with_reruns). Only setup_fixture in that scenario will fail and will be cleaned up. A dependent fixture cannot produce an error if one of the fixtures before it fails, unless I am missing something.
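The claim about dependency chains can be illustrated with a toy fixture resolver (my own sketch, not pytest's resolution machinery): when an upstream fixture raises during setup, the dependent fixture body never executes, so it cannot contribute its own error.

```python
def resolve(fixtures, name, cache=None):
    """Resolve fixture `name` from {name: (dependency_names, func)},
    running dependencies first; any setup error short-circuits the chain."""
    cache = {} if cache is None else cache
    if name not in cache:
        deps, func = fixtures[name]
        args = [resolve(fixtures, dep, cache) for dep in deps]  # may raise
        cache[name] = func(*args)
    return cache[name]


calls = []

def base_fixture():
    calls.append('base')
    raise RuntimeError('setup failed in base')

def dependent_fixture(base):
    calls.append('dependent')  # never reached when `base` fails
    return base + 1

FIXTURES = {
    'base': ([], base_fixture),
    'dependent': (['base'], dependent_fixture),
}
```

Resolving 'dependent' here propagates the RuntimeError from 'base' and never runs the dependent fixture's body, matching the behavior described above.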
test_pytest_rerunfailures.py
Outdated
def test_rerun_on_class_scope_fixture_with_error_with_reruns(testdir):
    """
    Case: Class scope fixture throwing error on the first execution for parametrized test
    """
Thank you for rephrasing the function names.
test_pytest_rerunfailures.py
Outdated
import pytest


class Execution:
    count = 0
I like those two tests, and I see that you have this pattern of an execution number, but it seems more pythonic to me to use a global boolean in this case instead of a singleton class to store it. But that is up to you.
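For comparison, here are both patterns under discussion as hypothetical sketches (the function names are mine, for illustration): the PR's singleton-style class holding a counter, versus a module-level boolean flag.

```python
# Pattern used in the PR's tests: class attribute as mutable shared state.
class Execution:
    count = 0

def setup_with_counter():
    if Execution.count == 0:
        Execution.count += 1
        raise AssertionError('fail on first execution')

# Reviewer's suggestion: a plain module-level boolean flag.
_failed_once = False

def setup_with_flag():
    global _failed_once
    if not _failed_once:
        _failed_once = True
        raise AssertionError('fail on first execution')
```

Both fail exactly once and then succeed on every subsequent call; the flag version carries less machinery when all you need is "has this failed yet".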
Thank you for the suggestion. Will redo this.
…class with static field according to PR comment.
# Conflicts:
#	CHANGES.rst
#	test_pytest_rerunfailures.py
fixes #64 issue for: