Processing of execution-time performance test results #11622

Merged · 67 commits · Aug 26, 2022

Conversation

@alexvy86 (Contributor) commented Aug 22, 2022

Description

  • New tool to process the JSON files produced by running the execution-time benchmark tests with the custom mocha reporter in @fluid-tools/benchmark.
  • New pipeline that builds the repo, runs execution-time performance tests, runs the tool from the previous bullet point to send relevant metrics from the output to one of our Kusto tenants, and publishes the test results as a pipeline artifact.
  • New npm script in @fluid-internal/tree and @fluid-experimental/tree to run the execution-time performance tests.
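To give a sense of what the processing tool in the first bullet does, here is a minimal TypeScript sketch. The field names (`benchmarkName`, `arithmeticMean`, `marginOfError`) and the row shape are assumptions for illustration, not the actual schema emitted by the @fluid-tools/benchmark reporter:

```typescript
// Hypothetical sketch of the result-processing step: flatten one benchmark
// result file's entries into rows suitable for ingestion into a metrics table.
// The input field names below are assumed, not the reporter's real schema.

interface BenchmarkEntry {
  benchmarkName: string;   // name of the individual benchmark test (assumed field)
  arithmeticMean: number;  // mean execution time in seconds (assumed field)
  marginOfError: number;   // margin of error for the mean (assumed field)
}

interface MetricRow {
  package: string;
  test: string;
  meanSeconds: number;
  marginOfError: number;
}

// Convert a package's parsed benchmark entries into flat metric rows.
function toMetricRows(packageName: string, entries: BenchmarkEntry[]): MetricRow[] {
  return entries.map((e) => ({
    package: packageName,
    test: e.benchmarkName,
    meanSeconds: e.arithmeticMean,
    marginOfError: e.marginOfError,
  }));
}

const rows = toMetricRows("@fluid-internal/tree", [
  { benchmarkName: "insert 1000 nodes", arithmeticMean: 0.012, marginOfError: 0.001 },
]);
console.log(JSON.stringify(rows));
```

The real tool additionally reads the JSON files from disk and sends the rows to Kusto; this sketch only shows the flattening step.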

The pipeline will run performance tests for any package that defines a test:benchmark:report npm script. It's assumed that this script already passes all the necessary options to mocha to run the performance tests, including the reporter from @fluid-tools/benchmark to display the results in the console. The release-group-level npm script will only add the output location for the reporter.
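For illustration, a package that opts into the pipeline might define the script roughly like this; the script name matches the convention above, but the mocha options shown are hypothetical placeholders, not the repo's actual configuration:

```json
{
  "scripts": {
    "test:benchmark:report": "mocha --config src/test/.mocharc.js \"dist/test/**/*.perf.spec.js\""
  }
}
```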

Reviewer Guidance

General approach

Chatting with @anthony-murphy, we decided to implement this as a separate pipeline rather than as additional steps in the existing Build - client packages one, mainly to keep our public and internal versions of the Build - client packages pipeline identical and as free of conditional steps as possible. These tests only need to run in our internal pipelines, so the definitions would have to diverge, or we'd need conditional logic that determines when to execute the steps for these tests.

This means the repo needs to be built "again" in this pipeline (instead of reusing the build that already happens in Build - client packages). That's one of the most expensive parts of our pipelines, so it doesn't feel ideal, but it's the tradeoff we decided on for now. Future work could revisit that decision; work that reduces the repository build time would also help.

New npm script

The new test:benchmark:report npm script in the Tree packages is very similar to the existing bench and perf ones. @CraigMacomber , @Abe27342 do you have any thoughts on consolidating them? Some possible diverging requirements I can see:

  • cross-env FLUID_TEST_VERBOSE=1: I'd skip that for the script invoked during CI.
  • expose-gc is not present in the packages/dds/tree script, but that might be an error.
  • Timeout values
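To make the comparison concrete, here is a hypothetical sketch of how the variants might differ once consolidated; the exact flags in the repo's existing bench/perf scripts may not match these:

```json
{
  "scripts": {
    "bench": "cross-env FLUID_TEST_VERBOSE=1 mocha --node-option expose-gc --timeout 60000 \"dist/test/**/*.perf.spec.js\"",
    "test:benchmark:report": "mocha --node-option expose-gc --timeout 60000 \"dist/test/**/*.perf.spec.js\""
  }
}
```

Under this sketch, the CI-facing script drops the verbose flag and shares everything else, which is one possible answer to the consolidation question above.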

Any relevant logs or outputs

Example of successful pipeline run

Results in Kusto:

(screenshot of Kusto query results omitted)

AB 1410

Removes unnecessary changes from an earlier "nuclear" update of these files.
Labels
  • area: build — Build related issues
  • area: dds — Issues related to distributed data structures
  • area: tests — Tests to add, test infrastructure improvements, etc.
  • base: main — PRs targeted against main branch
  • dependencies — Pull requests that update a dependency file
3 participants