I want to add testmon to my CI to use resources wisely. While doing so, I thought it might be a good time to also reduce flakiness by forcing all new tests, and all changes to existing tests, to run several times before considering them good.
To do so, I'm using pytest-flakefinder. The plugin hooks in at the very beginning of collection and generates N copies of each collected test. This way, if we have test_foo and we pass --flake-runs=3, it will run:
test_foo[0]
test_foo[1]
test_foo[2]
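For reference, a minimal, hypothetical test file that pytest-flakefinder would expand into the three items above (the body is just a placeholder):

```python
# test_example.py -- hypothetical minimal example; with pytest-flakefinder
# and --flake-runs=3 this single test is collected three times, as
# test_foo[0], test_foo[1] and test_foo[2].
def test_foo():
    assert 1 + 1 == 2
```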
Example of what happens when I run:
pytest -vv --testmon --flake-finder --flake-runs=3
======================================================================================= test session starts ========================================================================================
platform linux -- Python 3.8.16, pytest-7.2.0, pluggy-1.0.0 -- /home/pablo/Tinybird/testing-real-use-cases/.e/bin/python
cachedir: .pytest_cache
testmon: new DB, environment: default
We'd like to hear from testmon users! 🙏🙏 go to https://testmon.org/survey to leave feedback ✅❌
rootdir: /home/pablo/Tinybird/testing-real-use-cases
plugins: testmon-1.4.2, flakefinder-1.1.0, cov-4.0.0, Faker-15.1.1
collecting ... collected 9 items

test_testmon.py::test_sum[0] PASSED   [ 11%]
test_testmon.py::test_sum[1] PASSED   [ 22%]
test_testmon.py::test_sum[2] PASSED   [ 33%]
test_testmon.py::test_other[0] PASSED [ 44%]
test_testmon.py::test_other[1] PASSED [ 55%]
test_testmon.py::test_other[2] PASSED [ 66%]
test_testmon.py::test_new[0] PASSED   [ 77%]
test_testmon.py::test_new[1] PASSED   [ 88%]
test_testmon.py::test_new[2] PASSED   [100%]

======================================================================================== 9 passed in 0.11s =========================================================================================
Of course, this does not play well with testmon, because test_foo's ID does not match any test_foo[N]. I have an ugly workaround that I'm planning to submit as an MR, but I'd love to hear your feedback on whether it makes sense to you. It essentially sanitizes the names by removing the [N] part (see the sketch below), and that's about it. Python doesn't allow defining methods with such characters in their names, so I think we're safe.
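To illustrate the idea, here is a minimal sketch of the sanitization, using a hypothetical helper name; where exactly this would hook into testmon's node-ID handling is a separate question:

```python
import re

# Hypothetical helper: strip the trailing "[N]" suffix that pytest-flakefinder
# adds, so "test_testmon.py::test_sum[2]" maps back to the
# "test_testmon.py::test_sum" node ID that testmon has on record.
_FLAKE_SUFFIX = re.compile(r"\[\d+\]$")

def sanitize_nodeid(nodeid: str) -> str:
    return _FLAKE_SUFFIX.sub("", nodeid)

assert sanitize_nodeid("test_testmon.py::test_sum[2]") == "test_testmon.py::test_sum"
assert sanitize_nodeid("test_testmon.py::test_other") == "test_testmon.py::test_other"
```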