Refactor benchmark script (#6376)
* Add timer setting

* Setup benchmark code

* Setup memory benchmark

* Add compare function

* Add result preview

* Setup results preview

* Simplify script for CI

* Update CI

* Cleanup

* Temp remove fork guard

* Fix stuff

* Fix again

* Fix quotes

* Fix multiline output

* Simplify title

* Fix memory numbers

* Remove astro bin dir

* Fix gc

* Add repo guards

* Fix wrong call

* Set max space size

* Remove guard

* Bump memory a bit

* Organize neatly

* Update readme

* Try large md

* Try no gc

* Revert markdown and gc changes

* Test sha

* Try ref

* Try 128mb

* Set 256

* Add guard

* Apply suggestions from code review

Co-authored-by: Sarah Rainsberger <sarah@rainsberger.ca>

* Add docs comment

---------

Co-authored-by: Sarah Rainsberger <sarah@rainsberger.ca>
bluwy and sarah11918 authored Mar 1, 2023
1 parent 045262e commit f493794
Showing 25 changed files with 719 additions and 37 deletions.
6 changes: 6 additions & 0 deletions .eslintrc.cjs
@@ -38,5 +38,11 @@ module.exports = {
'no-console': ['error', { allow: ['warn', 'error', 'info', 'debug'] }],
},
},
{
files: ['benchmark/**/*.js'],
rules: {
'no-console': 'off',
},
},
],
};
50 changes: 25 additions & 25 deletions .github/workflows/benchmark.yml
@@ -16,19 +16,14 @@ jobs:
permissions:
contents: read
outputs:
PR-BENCH-16: ${{ steps.benchmark-pr.outputs.BENCH_RESULT16 }}
PR-BENCH-18: ${{ steps.benchmark-pr.outputs.BENCH_RESULT18 }}
MAIN-BENCH-16: ${{ steps.benchmark-main.outputs.BENCH_RESULT16 }}
MAIN-BENCH-18: ${{ steps.benchmark-main.outputs.BENCH_RESULT18 }}
strategy:
matrix:
node-version: [16, 18]
PR-BENCH: ${{ steps.benchmark-pr.outputs.BENCH_RESULT }}
MAIN-BENCH: ${{ steps.benchmark-main.outputs.BENCH_RESULT }}
steps:
# https://github.com/actions/checkout/issues/331#issuecomment-1438220926
- uses: actions/checkout@v3
with:
persist-credentials: false
ref: ${{github.event.pull_request.head.sha}}
repository: ${{github.event.pull_request.head.repo.full_name}}
ref: refs/pull/${{ github.event.issue.number }}/head

- name: Setup PNPM
uses: pnpm/action-setup@v2
@@ -45,13 +40,22 @@
- name: Build Packages
run: pnpm run build

- name: Get bench command
id: bench-command
run: |
benchcmd=$(echo "${{ github.event.comment.body }}" | grep '!bench' | awk -F ' ' '{print $2}')
echo "bench=$benchcmd" >> $GITHUB_OUTPUT
shell: bash

- name: Run benchmark
id: benchmark-pr
run: |
pnpm run --silent benchmark 2> ./bench-result.md
result=$(awk '/requests in/' ./bench-result.md)
echo "::set-output name=BENCH_RESULT${{matrix.node-version}}::$result"
echo "$result"
result=$(pnpm run --silent benchmark ${{ steps.bench-command.outputs.bench }})
processed=$(node ./benchmark/ci-helper.js "$result")
echo "BENCH_RESULT<<BENCHEOF" >> $GITHUB_OUTPUT
echo "### PR Benchmark" >> $GITHUB_OUTPUT
echo "$processed" >> $GITHUB_OUTPUT
echo "BENCHEOF" >> $GITHUB_OUTPUT
shell: bash

# main benchmark
@@ -70,10 +74,12 @@
- name: Run benchmark
id: benchmark-main
run: |
pnpm run --silent benchmark 2> ./bench-result.md
result=$(awk '/requests in/' ./bench-result.md)
echo "::set-output name=BENCH_RESULT${{matrix.node-version}}::$result"
echo "$result"
result=$(pnpm run --silent benchmark ${{ steps.bench-command.outputs.bench }})
processed=$(node ./benchmark/ci-helper.js "$result")
echo "BENCH_RESULT<<BENCHEOF" >> $GITHUB_OUTPUT
echo "### Main Benchmark" >> $GITHUB_OUTPUT
echo "$processed" >> $GITHUB_OUTPUT
echo "BENCHEOF" >> $GITHUB_OUTPUT
shell: bash

output-benchmark:
@@ -89,12 +95,6 @@
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
pr_number: ${{ github.event.issue.number }}
message: |
**Node**: 16
**PR**: ${{ needs.benchmark.outputs.PR-BENCH-16 }}
**MAIN**: ${{ needs.benchmark.outputs.MAIN-BENCH-16 }}
---
${{ needs.benchmark.outputs.PR-BENCH }}
**Node**: 18
**PR**: ${{ needs.benchmark.outputs.PR-BENCH-18 }}
**MAIN**: ${{ needs.benchmark.outputs.MAIN-BENCH-18 }}
${{ needs.benchmark.outputs.MAIN-BENCH }}
2 changes: 2 additions & 0 deletions .gitignore
@@ -6,6 +6,8 @@ dist/
_site/
scripts/smoke/*-main/
scripts/memory/project/src/pages/
benchmark/projects/
benchmark/results/
*.log
package-lock.json
.turbo/
5 changes: 5 additions & 0 deletions benchmark/README.md
@@ -0,0 +1,5 @@
# benchmark

Astro's main benchmark suite. It exposes the `astro-benchmark` CLI command. Run `astro-benchmark --help` to see all available commands!

If you'd like to understand how the benchmark works, check out the other READMEs in the subfolders.
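For reference, a few illustrative invocations (the command and flag names come from the CLI help in `benchmark/index.js`; the output path is just an example):

astro-benchmark                       # run all benchmarks
astro-benchmark memory                # run only the build memory/speed benchmark
astro-benchmark server-stress --project server-stress-default --output results/stress.json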
7 changes: 7 additions & 0 deletions benchmark/bench/README.md
@@ -0,0 +1,7 @@
# bench

This `bench` folder contains the benchmark files that you can run via `astro-benchmark <bench-file-name>`, e.g. `astro-benchmark memory`. Files that start with an underscore are not benchmark files.

Benchmark files run against a project to measure its performance and write the results as JSON to the `results` folder. The `results` folder is gitignored, and its result files can be safely deleted if you're not using them.

You can duplicate `_template.js` to start a new benchmark test. All shared utilities are kept in `_util.js`.
12 changes: 12 additions & 0 deletions benchmark/bench/_template.js
@@ -0,0 +1,12 @@
/** Default project to run for this benchmark if not specified */
export const defaultProject = 'project-name';

/**
* Run benchmark on `projectDir` and write results to `outputFile`.
* Use `console.log` to report the results too. Logs that start with 10 `=`
* and end with 10 `=` will be extracted by CI to display in the PR comment.
* Usually after the first 10 `=` you'll want to add a title like `#### Test`.
* @param {URL} projectDir
* @param {URL} outputFile
*/
export async function run(projectDir, outputFile) {}
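As an illustration only (not part of this commit), a filled-in benchmark following this template might look like the sketch below. It merely times a directory listing, but it shows the `defaultProject`/`run` contract, writes JSON to `outputFile`, and logs a preview between the ten-`=` markers that CI extracts:

import fs from 'fs/promises';
import { fileURLToPath } from 'url';

/** Default project to run for this benchmark if not specified */
export const defaultProject = 'memory-default';

/**
 * @param {URL} projectDir
 * @param {URL} outputFile
 */
export async function run(projectDir, outputFile) {
  // Stand-in for real work: time how long it takes to list the project root.
  const start = performance.now();
  const entries = await fs.readdir(fileURLToPath(projectDir));
  const elapsedMs = performance.now() - start;

  // Persist the raw numbers as JSON, like the real benchmarks do.
  await fs.writeFile(outputFile, JSON.stringify({ entries: entries.length, elapsedMs }, null, 2));

  // Preview between ten-`=` markers so CI can lift it into the PR comment.
  console.log('='.repeat(10));
  console.log(`#### Directory listing\n`);
  console.log(`Listed ${entries.length} entries in ${elapsedMs.toFixed(2)}ms`);
  console.log('='.repeat(10));
}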
3 changes: 3 additions & 0 deletions benchmark/bench/_util.js
@@ -0,0 +1,3 @@
import { createRequire } from 'module';

export const astroBin = createRequire(import.meta.url).resolve('astro');
58 changes: 58 additions & 0 deletions benchmark/bench/memory.js
@@ -0,0 +1,58 @@
import fs from 'fs/promises';
import { fileURLToPath } from 'url';
import { execaCommand } from 'execa';
import { markdownTable } from 'markdown-table';
import { astroBin } from './_util.js';

/** @typedef {Record<string, import('../../packages/astro/src/core/config/timer').Stat>} AstroTimerStat */

/** Default project to run for this benchmark if not specified */
export const defaultProject = 'memory-default';

/**
* @param {URL} projectDir
* @param {URL} outputFile
*/
export async function run(projectDir, outputFile) {
const root = fileURLToPath(projectDir);
const outputFilePath = fileURLToPath(outputFile);

console.log('Building and benchmarking...');
await execaCommand(`node --expose-gc --max_old_space_size=256 ${astroBin} build`, {
cwd: root,
stdio: 'inherit',
env: {
ASTRO_TIMER_PATH: outputFilePath,
},
});

console.log('Raw results written to', outputFilePath);

console.log('Result preview:');
console.log('='.repeat(10));
console.log(`#### Memory\n\n`);
console.log(printResult(JSON.parse(await fs.readFile(outputFilePath, 'utf-8'))));
console.log('='.repeat(10));

console.log('Done!');
}

/**
* @param {AstroTimerStat} output
*/
function printResult(output) {
return markdownTable(
[
['', 'Elapsed time (s)', 'Memory used (MB)', 'Final memory (MB)'],
...Object.entries(output).map(([name, stat]) => [
name,
(stat.elapsedTime / 1000).toFixed(2),
(stat.heapUsedChange / 1024 / 1024).toFixed(2),
(stat.heapUsedTotal / 1024 / 1024).toFixed(2),
]),
],
{
align: ['l', 'r', 'r', 'r'],
}
);
}
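To make the `printResult` table concrete, here is a sketch of the kind of object read from `ASTRO_TIMER_PATH` (the stage names and numbers are invented; only the field names match the `AstroTimerStat` typedef) and roughly what the function prints:

// Invented sample input; values are milliseconds and bytes, as consumed above.
const sample = {
  'Collect build info': { elapsedTime: 120, heapUsedChange: 1048576, heapUsedTotal: 52428800 },
  'Generate pages': { elapsedTime: 8450, heapUsedChange: 73400320, heapUsedTotal: 125829120 },
};
console.log(printResult(sample));
// Prints a markdown table roughly like:
// |                    | Elapsed time (s) | Memory used (MB) | Final memory (MB) |
// | Collect build info |             0.12 |             1.00 |             50.00 |
// | Generate pages     |             8.45 |            70.00 |            120.00 |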
85 changes: 85 additions & 0 deletions benchmark/bench/server-stress.js
@@ -0,0 +1,85 @@
import fs from 'fs/promises';
import { fileURLToPath } from 'url';
import autocannon from 'autocannon';
import { execaCommand } from 'execa';
import { waitUntilBusy } from 'port-authority';
import { astroBin } from './_util.js';

const port = 4321;

export const defaultProject = 'server-stress-default';

/**
* @param {URL} projectDir
* @param {URL} outputFile
*/
export async function run(projectDir, outputFile) {
const root = fileURLToPath(projectDir);

console.log('Building...');
await execaCommand(`${astroBin} build`, {
cwd: root,
stdio: 'inherit',
});

console.log('Previewing...');
const previewProcess = execaCommand(`${astroBin} preview --port ${port}`, {
cwd: root,
stdio: 'inherit',
});

console.log('Waiting for server ready...');
await waitUntilBusy(port, { timeout: 5000 });

console.log('Running benchmark...');
const result = await benchmarkCannon();

console.log('Killing server...');
if (!previewProcess.kill('SIGTERM')) {
console.warn('Failed to kill server process id:', previewProcess.pid);
}

console.log('Writing results to', fileURLToPath(outputFile));
await fs.writeFile(outputFile, JSON.stringify(result, null, 2));

console.log('Result preview:');
console.log('='.repeat(10));
console.log(`#### Server stress\n\n`);
let text = autocannon.printResult(result);
// Truncate the logs in CI so the comment generated by the `!bench` command stays short.
// Only this summary line is needed when comparing runs.
// Full log example: https://github.com/mcollina/autocannon#command-line
if (process.env.CI) {
text = text.match(/^.*?requests in.*?read$/m)?.[0];
}
console.log(text);
console.log('='.repeat(10));

console.log('Done!');
}

/**
* @returns {Promise<import('autocannon').Result>}
*/
async function benchmarkCannon() {
return new Promise((resolve, reject) => {
const instance = autocannon(
{
url: `http://localhost:${port}`,
connections: 100,
duration: 30,
pipelining: 10,
},
(err, result) => {
if (err) {
reject(err);
} else {
// @ts-expect-error untyped but documented
instance.stop();
resolve(result);
}
}
);
autocannon.track(instance, { renderResultsTable: false });
});
}
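The `process.env.CI` branch above keeps only autocannon's one-line summary. A small sketch of that truncation, using mocked-up report text rather than real autocannon output:

// Mocked-up report; only the line matching the summary regex survives in CI.
const fullReport = [
  'Running 30s test @ http://localhost:4321',
  '... latency and throughput tables ...',
  '1890k requests in 30.03s, 1.2 GB read',
].join('\n');
console.log(fullReport.match(/^.*?requests in.*?read$/m)?.[0]);
// → '1890k requests in 30.03s, 1.2 GB read'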
13 changes: 13 additions & 0 deletions benchmark/ci-helper.js
@@ -0,0 +1,13 @@
// This script extracts the benchmark logs found between the `==========` markers.
// The markers are a convention defined in `./bench/_template.js` and are used to build the
// comment posted by the `!bench` command. See `/.github/workflows/benchmark.yml` for how it's used.
const benchLogs = process.argv[2];
const resultRegex = /==========(.*?)==========/gs;

let processedLog = '';
let m;
while ((m = resultRegex.exec(benchLogs))) {
processedLog += m[1] + '\n';
}

console.log(processedLog);
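For example, given a made-up benchmark log, the extraction keeps only the block between the marker pairs:

// Hypothetical input (what a benchmark run might print to stdout).
const sampleLogs = [
  'Building and benchmarking...',
  '==========',
  '#### Memory',
  '| Generate pages | 8.45 | 70.00 | 120.00 |',
  '==========',
  'Done!',
].join('\n');
const extracted = [...sampleLogs.matchAll(/==========(.*?)==========/gs)]
  .map((m) => m[1])
  .join('\n');
console.log(extracted);
// → "\n#### Memory\n| Generate pages | 8.45 | 70.00 | 120.00 |\n"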
79 changes: 79 additions & 0 deletions benchmark/index.js
@@ -0,0 +1,79 @@
import fs from 'fs/promises';
import path from 'path';
import { pathToFileURL } from 'url';
import mri from 'mri';

const args = mri(process.argv.slice(2));

if (args.help || args.h) {
console.log(`\
astro-benchmark <command> [options]
Command
[empty] Run all benchmarks
memory Run build memory and speed test
server-stress Run server stress test
Options
--project <project-name> Project to use for benchmark, see benchmark/make-project/ for available names
--output <output-file> Output file to write results to
`);
process.exit(0);
}

const commandName = args._[0];
const benchmarks = {
memory: () => import('./bench/memory.js'),
'server-stress': () => import('./bench/server-stress.js'),
};

if (commandName && !(commandName in benchmarks)) {
console.error(`Invalid benchmark name: ${commandName}`);
process.exit(1);
}

if (commandName) {
// Run single benchmark
const bench = benchmarks[commandName];
const benchMod = await bench();
const projectDir = await makeProject(args.project || benchMod.defaultProject);
const outputFile = await getOutputFile(commandName);
await benchMod.run(projectDir, outputFile);
} else {
// Run all benchmarks
for (const name in benchmarks) {
const bench = benchmarks[name];
const benchMod = await bench();
const projectDir = await makeProject(args.project || benchMod.defaultProject);
const outputFile = await getOutputFile(name);
await benchMod.run(projectDir, outputFile);
}
}

async function makeProject(name) {
console.log('Making project:', name);
const projectDir = new URL(`./projects/${name}/`, import.meta.url);

const makeProjectMod = await import(`./make-project/${name}.js`);
await makeProjectMod.run(projectDir);

console.log('Finished making project:', name);
return projectDir;
}

/**
* @param {string} benchmarkName
*/
async function getOutputFile(benchmarkName) {
let file;
if (args.output) {
file = pathToFileURL(path.resolve(args.output));
} else {
file = new URL(`./results/${benchmarkName}-bench-${Date.now()}.json`, import.meta.url);
}

// Prepare output file directory
await fs.mkdir(new URL('./', file), { recursive: true });

return file;
}
7 changes: 7 additions & 0 deletions benchmark/make-project/README.md
@@ -0,0 +1,7 @@
# make-project

This `make-project` folder contains scripts that programmatically create new Astro projects. The generated projects are placed inside the `projects` folder and are gitignored. These projects are used by the benchmarks for testing.

Each benchmark can specify the default project to run via its `defaultProject` export, but it can be overridden by passing `--project <project-name>` through the CLI.

You can duplicate `_template.js` to start a new project script. All shared utilities are kept in `_util.js`.
6 changes: 6 additions & 0 deletions benchmark/make-project/_template.js
@@ -0,0 +1,6 @@
/**
* Create a new project in the `projectDir` directory. Make sure to clean up the
* previous artifacts here before generating files.
* @param {URL} projectDir
*/
export async function run(projectDir) {}
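For illustration (hypothetical, not one of the generators in this commit), a project script following this contract might simply clear the output directory and write a handful of markdown pages:

import fs from 'fs/promises';

/**
 * @param {URL} projectDir
 */
export async function run(projectDir) {
  // Clean up previous artifacts before generating files.
  await fs.rm(projectDir, { recursive: true, force: true });
  await fs.mkdir(new URL('./src/pages/', projectDir), { recursive: true });

  // A batch of simple markdown pages for the build to work through.
  for (let i = 0; i < 50; i++) {
    await fs.writeFile(
      new URL(`./src/pages/page-${i}.md`, projectDir),
      `# Page ${i}\n\nHello world!\n`
    );
  }
}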
2 changes: 2 additions & 0 deletions benchmark/make-project/_util.js


