
Mark unit test as flaky #770

Merged: 2 commits merged into develop from add_flaky on Aug 20, 2019
Conversation

@mfeurer (Collaborator) commented Aug 20, 2019

This PR marks one frequently failing unit test as flaky; it often fails under high test server load (i.e., multiple Travis runs at the same time). It also installs the flaky package, a pytest plugin that actually reruns the marked unit tests.
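For illustration, a minimal sketch of what marking a test with the flaky decorator looks like (the test name and retry numbers here are hypothetical, not the actual diff). The plugin only needs to be added to the test dependencies (e.g. pip install flaky) for pytest to pick it up:

```python
# Illustrative only -- not the actual test changed in this PR.
from flaky import flaky

@flaky(max_runs=3, min_passes=1)  # rerun up to 3 times; a single pass is enough
def test_run_on_busy_test_server():
    ...
```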

@codecov-io commented Aug 20, 2019

Codecov Report

Merging #770 into develop will increase coverage by 0.12%.
The diff coverage is n/a.


@@             Coverage Diff             @@
##           develop     #770      +/-   ##
===========================================
+ Coverage    87.57%   87.69%   +0.12%     
===========================================
  Files           36       36              
  Lines         4072     4072              
===========================================
+ Hits          3566     3571       +5     
+ Misses         506      501       -5
Impacted Files             Coverage Δ
openml/runs/functions.py   82.35% <0%> (+0.56%) ⬆️
openml/exceptions.py       93.75% <0%> (+9.37%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4a6c980...f7f8e6f.

@mfeurer requested a review from PGijsbers on August 20, 2019 10:09
@PGijsbers (Collaborator) commented Aug 20, 2019

I'm sorry, I'm confused about this... The test was already marked as flaky? What does it mean to use the default value for rerun? I don't see a reference to it in the flaky readme.

@mfeurer (Collaborator, Author) commented Aug 20, 2019

> The test was already marked as flaky?

One of them was; the other one is newly marked. I think nothing happened, though, because the flaky plugin was not yet installed.

> What does it mean to use the default value for rerun?

There is no rerun attribute; there is only something called max_runs, which defaults to two.

All in all, I think this PR is an improvement over the previous state. If the unit tests continue to fail, we can increase the number of retries even further.
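For illustration, with the flaky package a bare decorator uses the defaults mentioned above, and the retry count can be raised explicitly if needed (the numbers below are hypothetical):

```python
from flaky import flaky

@flaky  # defaults: max_runs=2, min_passes=1
def test_default_retries():
    ...

@flaky(max_runs=5, min_passes=1)  # more retries if the test server stays overloaded
def test_more_retries():
    ...
```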

@PGijsbers (Collaborator) commented:
To make sure I understand it right, what you are saying is that the previous @pytest.mark.flaky(reruns=3) had no effect since the package was not installed, and failed silently somehow?

@mfeurer (Collaborator, Author) commented Aug 20, 2019

From my understanding of the pytest docs, pytest does not provide functionality to rerun flaky tests by itself. Therefore, I concluded that the mark did not do anything and that we need the flaky plugin.
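As a small illustration of that point (not code from the repository): pytest lets you attach arbitrary marks to a test, and unless a plugin such as flaky or pytest-rerunfailures consumes the mark, it is just metadata and no rerun happens.

```python
import pytest

# Without a plugin that acts on the "flaky" mark, this decorator only attaches
# metadata to the test item; pytest itself will never rerun the test.
@pytest.mark.flaky(reruns=3)
def test_marked_but_never_rerun():
    assert True
```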

@PGijsbers (Collaborator) commented:
I'll trust your judgement on that. Seems weird but also probably correct.

@PGijsbers merged commit 3d08c2d into develop on Aug 20, 2019
@PGijsbers deleted the add_flaky branch on August 20, 2019 12:47