Add artifacts upload for the performance tests #48243
Conversation
Size Change: -1.17 kB (0%) Total Size: 1.33 MB
So are the artifacts already being generated but we are just not uploading them? Wouldn't it be easier to do a simple mv command in performance.yml as a step before uploading the artifact?

mv artifacts __test-results/artifacts
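For illustration, that suggested step might look roughly like this in the workflow (the step name and the failure() condition are assumptions, not taken from this PR):

- name: Move failure artifacts before upload   # step name assumed
  if: failure()
  run: mv artifacts __test-results/artifacts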
Flaky tests detected in bbdd33c. 🔍 Workflow run URL: https://github.com/WordPress/gutenberg/actions/runs/4292982546
IIUC, it wouldn't work because we create temp folders for each branch that we test against. Those paths are unavailable from the workflow file.
);
    throw error;
}
why should we only copy these if we fail? would we not want to upload them unconditionally so we can examine them even when the tests pass?
AFAIK artifacts are not created for successful tests.
aha, hm. I guess none of this would fail if we run it unconditionally? you're saying that the test runner doesn't create artifacts when the tests all pass?
just asking here because I've found it useful to analyze successful test results (e.g. the performance traces), but if they're not present then I guess there's no point.
the try/catch here is what holds me up. note that we're catching a lot of cases here, so for example, if the tests pass but curateResults fails, we'll still be copying the absence of files.

this is fine as it is, but I think it'd also be fine to leave the try/catch out of this and blanket copy all the artifacts, even if they don't exist. do you think there's a reason not to do that?
> you're saying that the test runner doesn't create artifacts when the tests all pass?
The artifacts I'm trying to handle here are test failure ones, which include a page screenshot and HTML snapshot (at the moment of failure). This is different from the performance results json that we create only if the tests pass.
> the try/catch here is what holds me up. note that we're catching a lot of cases here, so for example, if the tests pass but curateResults fails, we'll still be copying the absence of files.

> this is fine as it is, but I think it'd also be fine to leave the try/catch out of this and blanket copy all the artifacts, even if they don't exist. do you think there's a reason not to do that?
We need this try/catch block here because otherwise, it would never get to the part where we copy the created artifacts. I agree, though, that it would be better to wrap only the bit where we run the tests (npm run test:performance ...). I'll make an update.
Let's use the same naming as we do in the E2E jobs.
This way, regardless of which comparison part fails, we'll always be able to check for artifacts if they exist.
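For reference, the kind of upload step being discussed is sketched below; the artifact name and path are assumptions and may not match what the PR actually uses:

- name: Archive performance test failure artifacts
  uses: actions/upload-artifact@v3
  if: failure()
  with:
    name: failures-artifacts       # assumed to mirror the E2E job naming
    path: artifacts
    if-no-files-found: ignore      # no-op when the run produced no failure artifacts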
This reverts commit 4f32c7a.
@dmsnell this one's ready for another go! 🙌
          script: |
            const commit_details = await github.rest.git.getCommit({owner: context.repo.owner, repo: context.repo.repo, commit_sha: context.sha});
            return parseInt((new Date( commit_details.data.author.date ).getTime() / 1000).toFixed(0))

      - name: Publish performance results
        if: github.event_name == 'push'
        env:
          COMMITTED_AT: ${{ steps.commit-timestamp.outputs.result }}
I think there's a mismatch between "COMMITTED_AT" and "commit_details.data.author.date" whereby the author can choose a time for the commit but the actual time of commit is different.

we should also be able to skip the network call from JS and use the commit sha directly with git. it looks like we're trying to get the seconds since Jan. 1, 1970?
- name: Get commit timestamp
  if: github.event_name == 'push'
  id: commit-timestamp
  run: |
    # Print the commit timestamp in Unix epoch seconds
    git show -s --format='%ct' $GITHUB_SHA
In this case %at shows the "author time" while %ct shows the "commit time." For common work in this repo the difference is largely between when the author last finished writing the code and when the code was merged into the project.
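If later steps still read the value as steps.commit-timestamp.outputs.result (as the COMMITTED_AT env above does), a shell-only step could write the timestamp to GITHUB_OUTPUT. This is only a sketch under that assumption:

- name: Get commit timestamp
  if: github.event_name == 'push'
  id: commit-timestamp
  run: |
    # %ct is the committer date in Unix epoch seconds; no API call needed
    echo "result=$(git show -s --format='%ct' $GITHUB_SHA)" >> "$GITHUB_OUTPUT"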
Nice catch, PR here #48673
Thanks @WunderBart - I left a comment about the git timestamp, and I see you are mostly just moving it instead of changing it, so I don't think it should block this PR. Still, looks like we have needless JS and network calls in there and might be reporting a different value than we indicate we are.
What?
Upload artifacts for the performance tests.
Why?
So that it's easier to debug them.
How?
Add the same artifacts upload action that the E2E tests use.
Testing Instructions
CI should upload the performance test artifacts when the job fails. This commit shows that it works as expected.