Integrated Benchmarks #4795
Conversation
"integrated-benchmarks/http", | ||
"integrated-benchmarks/http-otel", | ||
"integrated-benchmarks/express", | ||
"integrated-benchmarks/express-otel" |
@pichlermarc do you know why these get removed on npm i? Are they meant to be somewhere else?
I think these may also need to be added to the npm workspaces config; then they should stay in place even when npm install-ing. Same as below.
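For reference, a sketch of what adding them to the root package.json workspaces list could look like (the placeholder comment stands in for the repo's existing globs, which aren't shown here):

```jsonc
{
  "workspaces": [
    // ...existing workspace globs go here (placeholder)...
    "integrated-benchmarks/http",
    "integrated-benchmarks/http-otel",
    "integrated-benchmarks/express",
    "integrated-benchmarks/express-otel"
  ]
}
```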
},
{
  "path": "integrated-benchmarks/http"
},
{
  "path": "integrated-benchmarks/http-otel"
},
{
  "path": "integrated-benchmarks/express"
},
{
  "path": "integrated-benchmarks/express-otel"
ditto
nodejs
nodePackages.npm
rustc
cargo
go
crystal
The Nix shell needs these runtimes to run the benchmarks.
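For context, a minimal sketch of how such a package list typically sits in a shell.nix (assuming a standard mkShell; the PR's actual file may differ):

```nix
{ pkgs ? import <nixpkgs> {} }:

# Sketch only: Node.js runs the benchmark servers; Rust, Go, and Crystal
# are assumed to back the load-generation tooling and the run.cr driver.
pkgs.mkShell {
  buildInputs = with pkgs; [
    nodejs
    nodePackages.npm
    rustc
    cargo
    go
    crystal
  ];
}
```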
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
- name: Run benchmarks
  run: cd ./integrated-benchmarks && nix-shell --quiet --run ./run.cr
- name: Push readme
Open discussion: where should the benchmarks go? Is committing them to the readme OK for now?
Benchmark data is currently committed to the gh-pages branch at https://github.com/open-telemetry/opentelemetry-js/blob/gh-pages/benchmarks/data.js and then shows up at https://opentelemetry.io/docs/languages/js/benchmarks/
@@ -0,0 +1,47 @@
name: Run Integrated Benchmarks

on:
Open discussion on when these benchmarks should be invoked.
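For illustration, one trigger setup that would match the behavior described in the PR (results printed on pull requests, committed on main); a sketch, not necessarily the workflow's actual content:

```yaml
on:
  pull_request:       # results are printed to the job console
  push:
    branches: [main]  # results are committed to benchmarks.md
```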
cd ./integrated-benchmarks && nix-shell --quiet --run ./run.cr
```

> It's best to use the pipeline to run the benchmarks, as it ensures they run in the same environment as CI. Running locally may give different results and consume all your machine's resources.
The committed initial benchmarks in benchmarks.md are from running locally. Ideally, we should let the main branch commit the benchmarks and use the main machine as the source of truth. On merge of this PR the benchmarks file would be updated and should reflect more reliable data.
@@ -0,0 +1,10 @@
<!-- README.md is generated from README.ecr, do not edit -->
Ditto: this is initial data from a local machine; let CI commit it in. Or check the pipeline's output. @pichlermarc, could you enable the runners for this PR?
"start": "node ./build/src/index.js", | ||
"compile": "tsc --build", | ||
"clean": "tsc --build --clean", | ||
"lint": "eslint . --ext .ts", | ||
"lint:fix": "eslint . --ext .ts --fix", | ||
"version": "node ../../scripts/version-update.js", | ||
"precompile": "cross-var lerna run version --scope $npm_package_name --include-dependencies", | ||
"prewatch": "npm run precompile", | ||
"peer-api-check": "node ../../scripts/peer-api-check.js", | ||
"align-api-deps": "node ../../scripts/align-api-deps.js" |
Which of these do we need if we never publish the benchmarks to npm?
Use ./integrated-benchmarks/README.md as the source of truth for this PR's supporting documentation.
I think this is a great idea and definitely helpful. I'm worried about long-term maintainability, though, given the additional tooling that many contributors are not too familiar with.
var app = express();

app.use('/hello', async (req, res) => {
  const tracer = opentelemetry.trace.getTracer('hello-tracer');
A performance-minded user would instantiate one tracer and hold on to it to avoid expensive lookup operations on each request.
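For illustration, a sketch of the hoisted form (the route and tracer name come from the snippet above; the handler body and span name are assumptions):

```js
const opentelemetry = require('@opentelemetry/api');
const express = require('express');

// Look the tracer up once at module load instead of on every request.
const tracer = opentelemetry.trace.getTracer('hello-tracer');

const app = express();

app.use('/hello', async (req, res) => {
  // Reuse the module-level tracer; no per-request getTracer() call.
  const span = tracer.startSpan('hello'); // span name is illustrative
  res.json({ message: 'Hello World' });
  span.end();
});
```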
jobs:
  integrated-benchmarks:
    runs-on: ubuntu-latest
We do have bare-metal runners for benchmarks, see https://github.com/open-telemetry/opentelemetry-js/blob/main/.github/workflows/benchmark.yml
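A sketch of what pointing this job at that pool might look like (the runner label is an assumption; check benchmark.yml for the labels actually in use):

```yaml
jobs:
  integrated-benchmarks:
    # Assumption: the bare-metal pool is reachable via the self-hosted
    # label, as used by .github/workflows/benchmark.yml.
    runs-on: self-hosted
```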
- uses: cachix/install-nix-action@v24
  with:
    nix_path: nixpkgs=channel:nixos-unstable
- uses: actions/cache@v3
  with:
    path: |
      ~/.ivy2/cache
      ~/.sbt
    key: ${{ runner.os }}-sbt-${{ hashFiles('**/build.sbt') }}
- uses: actions/cache@v3
  with:
    path: |
      ~/.cargo/bin/
      ~/.cargo/registry/index/
      ~/.cargo/registry/cache/
      ~/.cargo/git/db/
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
Hmm, would it be possible to get away with less tooling? Most of our contributors may not be familiar with these tools. I'm worried about long-term maintainability of this if we don't have enough people familiar with it.
"integrated-benchmarks/http", | ||
"integrated-benchmarks/http-otel", | ||
"integrated-benchmarks/express", | ||
"integrated-benchmarks/express-otel" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think these may also need to be added to the npm workspace and and then they should stay even when npm install
ing. Same as below.
"peerDependencies": { | ||
"@opentelemetry/api": ">=1.0.0 <1.10.0" | ||
}, |
Since this does not implement the API, we don't need the peer dependency.
"access": "restricted" | ||
}, | ||
"devDependencies": { | ||
"@opentelemetry/api": ">=1.0.0 <1.10.0", |
Same here: it does not implement the API, so we can depend directly on the latest version.
"@opentelemetry/api": ">=1.0.0 <1.10.0", | |
"@opentelemetry/api": "1.9.0", |
"express": "4.19.2", | ||
"cross-var": "1.1.0", | ||
"lerna": "6.6.2", | ||
"ts-mocha": "10.0.0", |
I think mocha is unused in this package.
This PR is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days.
This PR was closed because it has been stale for 14 days with no activity.
This PR adds benchmarks with and without OTel usage inside common Node.js server setups like http and Express.
Related:
What is added?

I have added a directory to the monorepo: ./integrated-benchmarks. It contains a fork of otel-js-server-benchmarks and uses a combination of Crystal, Nix, and bombardier to spawn Node.js servers with and without basic OpenTelemetry usage. The benchmarks run in CI for each commit: on pull requests the results are printed to the console, and on main they are committed into the benchmarks.md file.

Anatomy of a benchmark
Each benchmark is a Node.js web server on port 8000. Each server has a single endpoint, /hello, that returns a simple JSON response of { message: "Hello World" }. The benchmarks contain versions of this endpoint with and without OpenTelemetry span creation.

Comparison
To see the impact of OpenTelemetry JS in the tested runtimes, I have added "base case" benchmarks, named without the -otel suffix. For example, the http benchmark:
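(The PR embeds the actual source here; the following is a minimal sketch of such a base-case server and may differ from the committed file.)

```js
const http = require('http');

// Base case: a plain Node.js HTTP server with no OpenTelemetry at all.
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello World' }));
}).listen(8000);
```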
Now, to see the impact of adding OpenTelemetry to the benchmarks, I added basic usage in the -otel variants, such as http-otel:
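(Again a sketch rather than the committed file; it assumes the SDK, processor, and exporter are configured elsewhere.)

```js
const http = require('http');
const opentelemetry = require('@opentelemetry/api');

// Assumes a tracer provider is registered elsewhere in the benchmark setup.
const tracer = opentelemetry.trace.getTracer('hello-tracer');

http.createServer((req, res) => {
  const span = tracer.startSpan('hello'); // span name is illustrative
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello World' }));
  span.end();
}).listen(8000);
```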
Why was it added?
Based on the conversation in the related issue:

The current benchmarks (https://open-telemetry.github.io/opentelemetry-js/benchmarks/) don't give a good reflection of how processing, exporting, and creating spans in a web server affect performance.

Not only that, but the benchmarks we have already created show that even after doing all the necessary things, such as adding the BatchSpanProcessor, OpenTelemetry JS still adds significant overhead to a Node.js application.

Given all the reasons in the linked issue and mentioned here, I have added these benchmarks to the OpenTelemetry source code, as they provide transparent insight into performance and allow the community to iterate on performance improvements.