Update slow tests marks and add script to automate #1164
Merged
Fixes #1145
Adds a new script, `src/scripts/automation/mark_slow_tests.py`, which uses a JSON report of Pytest results generated by the `pytest-json-report` plugin to identify slow-running tests and update the decorators used to mark them in the source code. The script uses RedBaron to parse each module's source code into a lossless abstract source tree, then adds or removes the slow mark decorators on the relevant test functions identified from the test report. To avoid too much switching back and forth for tests with durations close to the threshold used to determine whether a test is 'slow' (currently ~10s), a basic form of hysteresis is used: slow marks are only added if the measured test duration is above an upper threshold (default 11s) and only removed if it is below a lower threshold (default 9s).
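For illustration, a minimal sketch of how the JSON report, the hysteresis thresholds, and the RedBaron decorator edits could fit together is shown below. The helper names, the assumed layout of the `pytest-json-report` output, and the exact RedBaron calls used to edit decorators are illustrative assumptions, not the actual contents of `mark_slow_tests.py`:

```python
"""Illustrative sketch only - not the PR's actual implementation."""
import json
from pathlib import Path

from redbaron import RedBaron  # lossless source tree, as used by the script

SLOW_MARK = "@pytest.mark.slow"
UPPER_THRESHOLD = 11.0  # add the slow mark above this duration (seconds)
LOWER_THRESHOLD = 9.0   # remove the slow mark below this duration (seconds)


def test_durations(report_path):
    """Map test node IDs to total durations from a pytest-json-report file.

    Assumes the report layout written by pytest-json-report, where each
    entry in "tests" has a "nodeid" and per-phase dicts with a "duration".
    """
    report = json.loads(Path(report_path).read_text())
    durations = {}
    for test in report["tests"]:
        durations[test["nodeid"]] = sum(
            test.get(phase, {}).get("duration", 0.0)
            for phase in ("setup", "call", "teardown")
        )
    return durations


def decide(duration, currently_marked):
    """Hysteresis: only flip the mark when clearly outside the band."""
    if duration > UPPER_THRESHOLD:
        return True
    if duration < LOWER_THRESHOLD:
        return False
    return currently_marked  # within the band: leave the mark as-is


def update_module(source, decisions):
    """Add/remove slow marks on named test functions in one module's source.

    `decisions` maps function names to True (mark slow) or False (unmark).
    The decorator edits below use RedBaron's proxy-list interface; the
    exact calls in the real script may differ.
    """
    tree = RedBaron(source)
    for func in tree.find_all("def"):
        if func.name not in decisions:
            continue
        has_mark = any("slow" in dec.dumps() for dec in func.decorators)
        if decisions[func.name] and not has_mark:
            func.decorators.append(SLOW_MARK)  # assumed proxy-list API
        elif not decisions[func.name] and has_mark:
            for dec in func.decorators:
                if "slow" in dec.dumps():
                    func.decorators.remove(dec)
                    break
    return tree.dumps()  # surrounding formatting is preserved
```

Leaving the decision unchanged for durations inside the 9-11s band is what stops tests hovering around the ~10s threshold from being repeatedly marked and unmarked between runs.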
A `tox` environment which installs the additional dependencies (`pytest-json-report` and `redbaron`), runs `pytest` with JSON report output, and then runs the `mark_slow_tests.py` script to update the tests is also added.

Updates to the slow marks based on a local run of the tests and script are also included in this PR - we could possibly separate this out into a different PR. It would probably also be worth exploring automating running the script as an Actions workflow, potentially doing something like running it on a weekly schedule and opening a PR for any changes to slow marks.