
Add artifacts upload for the performance tests #48243

Merged (9 commits, Mar 2, 2023)
27 changes: 18 additions & 9 deletions .github/workflows/performance.yml
@@ -86,15 +86,6 @@ jobs:
WP_MAJOR="${WP_VERSION_ARRAY[0]}.${WP_VERSION_ARRAY[1]}"
./bin/plugin/cli.js perf $GITHUB_SHA debd225d007f4e441ceec80fbd6fa96653f94737 --tests-branch $GITHUB_SHA --wp-version "$WP_MAJOR"

- uses: actions/github-script@98814c53be79b1d30f795b907e553d8679345975 # v6.4.0
if: github.event_name == 'push'
id: commit-timestamp
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
const commit_details = await github.rest.git.getCommit({owner: context.repo.owner, repo: context.repo.repo, commit_sha: context.sha});
return parseInt((new Date( commit_details.data.author.date ).getTime() / 1000).toFixed(0))

- name: Compare performance with custom branches
if: github.event_name == 'workflow_dispatch'
env:
@@ -103,9 +94,27 @@ jobs:
run: |
./bin/plugin/cli.js perf $(echo $BRANCHES | tr ',' ' ') --tests-branch $GITHUB_SHA --wp-version "$WP_VERSION"

- name: Get commit timestamp
uses: actions/github-script@98814c53be79b1d30f795b907e553d8679345975 # v6.4.0
if: github.event_name == 'push'
id: commit-timestamp
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
const commit_details = await github.rest.git.getCommit({owner: context.repo.owner, repo: context.repo.repo, commit_sha: context.sha});
return parseInt((new Date( commit_details.data.author.date ).getTime() / 1000).toFixed(0))

- name: Publish performance results
if: github.event_name == 'push'
env:
COMMITTED_AT: ${{ steps.commit-timestamp.outputs.result }}
Member
I think there's a mismatch between "COMMITTED_AT" and "commit_details.data.author.date" whereby the author can choose a time for the commit but the actual time of commit is different.

we should also be able to skip the network call from JS and use the commit sha directly with git. it looks like we're trying to get the seconds since Jan. 1, 1970?

- name: Get commit timestamp
  if: github.event_name == 'push'
  id: commit-timestamp
  run: |
    # Print the commit timestamp in Unix epoch seconds
    git show -s --format='%ct' $GITHUB_SHA

In this case %at shows the "author time" while %ct shows the "commit time." For typical work in this repo, the difference is between when the author last finished writing the code and when the code was merged into the project.

Member Author
Nice catch, PR here #48673

CODEHEALTH_PROJECT_TOKEN: ${{ secrets.CODEHEALTH_PROJECT_TOKEN }}
run: ./bin/log-performance-results.js $CODEHEALTH_PROJECT_TOKEN trunk $GITHUB_SHA debd225d007f4e441ceec80fbd6fa96653f94737 $COMMITTED_AT

- name: Archive debug artifacts (screenshots, HTML snapshots)
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
if: always()
with:
name: failures-artifacts
path: ./__test-results/artifacts
if-no-files-found: ignore
23 changes: 19 additions & 4 deletions bin/plugin/commands/performance.js
@@ -182,10 +182,24 @@ function curateResults( testSuite, results ) {
* @return {Promise<WPPerformanceResults>} Performance results for the branch.
*/
async function runTestSuite( testSuite, performanceTestDirectory, runKey ) {
await runShellScript(
`npm run test:performance -- packages/e2e-tests/specs/performance/${ testSuite }.test.js`,
performanceTestDirectory
);
try {
await runShellScript(
`npm run test:performance -- packages/e2e-tests/specs/performance/${ testSuite }.test.js`,
performanceTestDirectory
);
} catch ( error ) {
fs.mkdirSync( './__test-results/artifacts', { recursive: true } );
const artifactsFolder = path.join(
performanceTestDirectory,
'artifacts/'
);
await runShellScript(
'cp -Rv ' + artifactsFolder + ' ' + './__test-results/artifacts/'
);

throw error;
}
Member
why should we only copy these if we fail? would we not want to upload them unconditionally so we can examine them even when the tests pass?

Member Author
AFAIK artifacts are not created for successful tests.

Member
aha, hm. I guess none of this would fail if we run it unconditionally? you're saying that the test runner doesn't create artifacts when the tests all pass?

just asking here because I've found it useful to analyze successful test results (e.g. the performance traces), but if they're not present then I guess there's no point.

the try/catch here is what holds me up. note that we're catching a lot of cases here, so for example, if the tests pass but curateResults fails, we'll still be copying the absence of files.

this is fine as it is, but I think it'd also be fine to leave the try/catch out of this and blanket copy all the artifacts, even if they don't exist. do you think there's a reason not to do that?

Member Author (@WunderBart, Feb 27, 2023)
> you're saying that the test runner doesn't create artifacts when the tests all pass?

The artifacts I'm trying to handle here are test failure ones, which include a page screenshot and HTML snapshot (at the moment of failure). This is different from the performance results json that we create only if the tests pass.

> the try/catch here is what holds me up. note that we're catching a lot of cases here, so for example, if the tests pass but curateResults fails, we'll still be copying the absence of files.
>
> this is fine as it is, but I think it'd also be fine to leave the try/catch out of this and blanket copy all the artifacts, even if they don't exist. do you think there's a reason not to do that?

We need this try/catch block here because otherwise, it would never get to the part where we copy the created artifacts. I agree, though, that it would be better to wrap only the bit where we run the tests (npm run test:performance ...). I'll make an update.

const resultsFile = path.join(
performanceTestDirectory,
`packages/e2e-tests/specs/performance/${ testSuite }.test.results.json`
@@ -198,6 +212,7 @@ async function runTestSuite( testSuite, performanceTestDirectory, runKey ) {
`packages/e2e-tests/specs/performance/${ testSuite }.test.results.json`
)
);

return curateResults( testSuite, rawResults );
}
