Formalize / automate release performance stability testing #245
Comments
mum4k pushed a commit that referenced this issue on Jun 26, 2020:
This PR has changes that make it easy to consume our integration testing as a benchmarking tool framework for external repositories, allowing those to write custom tests for running in CI. This probably needs to be split up into multiple PRs, but it allows reviewers to have an early peek at what such PRs would lead up to. The `README.md` in here has more context, but the primary goals are to be able to:

- run the tests against arbitrary Envoy revisions
- persist profile dumps, flamegraphs, and latency numbers per test
- run the Nighthawk tools via docker
- offer stock tests, but also allow scaffolding consumer-specific tests

Example of a fully dockerized flow (more examples in `README.md`):

```
# This script runs the dockerized benchmarking framework, which in
# turn will pull Nighthawk and Envoy in via docker.
set -eo pipefail
set +x
set -u

# The benchmark logs and artifacts will be dropped here.
OUTDIR="/home/oschaaf/code/envoy-perf-vscode/nighthawk/benchmarks/tmp/"
# Used to map the test that we want to see executed into the docker container.
TEST_DIR="/home/oschaaf/code/envoy-perf-vscode/nighthawk/benchmarks/test/"

# Rebuild the docker image in case something changed.
./docker_build.sh && docker run -it --rm \
  -v "/var/run/docker.sock:/var/run/docker.sock:rw" \
  -v "${OUTDIR}:${OUTDIR}:rw" \
  -v "${TEST_DIR}:/usr/local/bin/benchmarks/benchmarks.runfiles/nighthawk/benchmarks/external_tests/" \
  --network=host \
  --env NH_DOCKER_IMAGE="envoyproxy/nighthawk-dev:latest" \
  --env ENVOY_DOCKER_IMAGE_TO_TEST="envoyproxy/envoy-dev:f61b096f6a2dd3a9c74b9a9369a6ea398dbe1f0f" \
  --env TMPDIR="${OUTDIR}" \
  oschaaf/benchmark-dev:latest ./benchmarks --log-cli-level=info -vvvv
```

Part of #245
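The first goal above, running the tests against arbitrary Envoy revisions, amounts to sweeping `ENVOY_DOCKER_IMAGE_TO_TEST` over a list of SHAs while keeping the rest of the `docker run` invocation fixed. A minimal sketch of that, assuming the image naming and environment variables shown in the script above (the `benchmark_command` helper itself is hypothetical, not part of the framework):

```python
# Sketch: build the `docker run` command line for one Envoy revision, so a
# driver script can iterate over several revisions. The image names and env
# vars mirror the example above; the helper is illustrative only.

def benchmark_command(envoy_sha, outdir, test_dir,
                      nh_image="envoyproxy/nighthawk-dev:latest"):
    """Return the argv for one dockerized benchmark run."""
    envoy_image = f"envoyproxy/envoy-dev:{envoy_sha}"
    return [
        "docker", "run", "-it", "--rm",
        "-v", "/var/run/docker.sock:/var/run/docker.sock:rw",
        "-v", f"{outdir}:{outdir}:rw",
        "-v", (f"{test_dir}:/usr/local/bin/benchmarks/benchmarks.runfiles/"
               "nighthawk/benchmarks/external_tests/"),
        "--network=host",
        "--env", f"NH_DOCKER_IMAGE={nh_image}",
        "--env", f"ENVOY_DOCKER_IMAGE_TO_TEST={envoy_image}",
        "--env", f"TMPDIR={outdir}",
        "oschaaf/benchmark-dev:latest",
        "./benchmarks", "--log-cli-level=info", "-vvvv",
    ]

if __name__ == "__main__":
    # Dry-run: print the command for each revision instead of executing it.
    for sha in ["f61b096f6a2dd3a9c74b9a9369a6ea398dbe1f0f"]:
        print(" ".join(benchmark_command(sha, "/tmp/out", "/tmp/tests")))
```

In CI one would pass the resulting argv to `subprocess.run` per revision, writing each run's artifacts to a revision-specific `OUTDIR`.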
wjuan-AFK pushed a commit to wjuan-AFK/nighthawk that referenced this issue on Jul 14, 2020 (same PR description as above; part of envoyproxy#245).
In #232 I added a code-level TODO regarding one of the release steps, which is meant to
ensure that we don't introduce major performance and/or accuracy changes between releases.
Filing this issue to discuss and track.
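The release-stability check this issue asks for could be sketched as a simple gate over the persisted latency numbers: compare each metric from the candidate release against the previous release's baseline and fail if any metric regressed past a tolerance. This is a minimal illustration, assuming results are available as metric-name to value mappings (the function and metric names are hypothetical, not part of Nighthawk):

```python
# Sketch of a release-stability gate: flag any metric in the candidate run
# that is worse than the baseline by more than `tolerance` (fractional).
# Assumes higher values are worse (e.g. latency in milliseconds).

def check_regression(baseline, candidate, tolerance=0.05):
    """Return {metric: (baseline, candidate)} for metrics past tolerance."""
    regressions = {}
    for name, base in baseline.items():
        cand = candidate.get(name)
        if cand is None:
            continue  # metric absent in candidate run; skip
        if cand > base * (1.0 + tolerance):
            regressions[name] = (base, cand)
    return regressions

if __name__ == "__main__":
    baseline = {"p50_ms": 1.0, "p99_ms": 5.0}
    candidate = {"p50_ms": 1.02, "p99_ms": 6.0}
    # p99 regressed by 20%, well past the 5% tolerance; p50 is within it.
    print(check_regression(baseline, candidate))
```

A CI job could run this after the benchmarks, failing the release step when the returned dict is non-empty; the tolerance would need tuning against observed run-to-run noise to avoid false positives.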