Perf tests: Store test run results as artifact #45747
Conversation
Size Change: 0 B. Total Size: 1.32 MB.
Flaky tests detected in 921e18d9ffbbd1b431fe35ff8a7c56c91c3f639e. 🔍 Workflow run URL: https://github.com/WordPress/gutenberg/actions/runs/4126349596
@noisysocks were you and I talking about adding this? If not, I need to find whoever it was.
LGTM 👍
What?
Stores raw data collected during Performance Test CI workflow as an artifact for further external analysis.
Why?
It's hard to extract information from a test beyond the way we currently refine it within the test run and log it to stdout from the test runner. Storing the raw data gives us a chance to play with the data and explore it outside of the predefined analysis in the code, and it lets us change our analysis questions without having to rerun an experiment or re-collect the data.
How?
In this commit we're storing the raw results of all the performance test
runs as an artifact in GitHub Actions so that we can perform more extensive
analysis outside of workflow runs.
For every round of testing we end up with a copy of the results in a JSON
file with the branch ref, the test suite name, and the test run index in its
name. This makes it possible to analyze each run separately without making
it hard to analyze everything together.
This is accomplished by adding a new step in the Performance Tests CI workflow config.
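A workflow step of that kind might look like the sketch below, using `actions/upload-artifact`. The step name, artifact name, and results path are assumptions, not the actual workflow config:

```yaml
# Hypothetical sketch of the added Performance Tests workflow step.
- name: Archive performance results
  if: success()
  uses: actions/upload-artifact@v3
  with:
    name: performance-results
    path: ./__test-results/*.performance-results.json
```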
Testing Instructions
There are no functional or visual changes to Gutenberg in this PR. What should change is that a new artifact should appear on the Performance Tests CI workflow. Verify that the results appear and that the artifact is stored as an archive of the various test runs.
How to find results