benchmark jenkins job? #8157
/cc @nodejs/benchmarking @nodejs/future-ci
@AndreasMadsen we have a dedicated bare-bones machine to run benchmarks. These are published here: https://benchmarking.nodejs.org. The benchmark group could probably look into adding more tests.
I'm looking into a way of collecting regular results from the benchmarks directory. Due to the sheer volume of data, we need to find a way of summarizing the scores whilst still exposing the individual numbers. The other problem to overcome is deciding how we would run the benchmarks. We need a way of changing only one thing (we're interested in the performance of Node across releases, so this is what we should change) while all other components (the machine, the tests run, etc.) stay the same.
The benchmarks directory is under constant change, so we need to either stick with a particular (back-level) version of the benchmarks and reuse it, or run the latest benchmarks and include an older version of node to compare against. The latter could cause issues where tests have been changed to use new functionality that is not backwards compatible. Some comments on this would be good, so we can come to a decision and get the benchmarks running.
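One way to hold the benchmark code constant while varying only the node version is to keep a single benchmark tree (say, from master) and point benchmark/compare.js at two prebuilt binaries. A minimal sketch, assuming both binaries have already been built and copied aside; the ./node-old and ./node-new paths are placeholders:
# One benchmark tree, two prebuilt binaries: only the node version varies.
# ./node-old and ./node-new are placeholder names for the two builds.
git checkout master                   # pin the benchmark directory to one revision
./node-new benchmark/compare.js \
    --old ./node-old --new ./node-new \
    --runs 30 --filter simple http | tee output.csv
cat output.csv | Rscript benchmark/compare.R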
@jbergstroem @gareth-ellis I don't want continuous observation of performance (though that is important), I want the ability to say "please compare master and this PR using this benchmark". Changes in benchmarks shouldn't be an issue; we could just consistently use the version in master. Essentially I want it to run:
git checkout master
# build master
./configure
make
cp out/Release/node ./node-master  # ./node is only a symlink into out/Release, so copy the real binary
# build pr
apply-pr $PR_ID  # placeholder for fetching and applying the pull request under test
./configure
make
cp out/Release/node ./node-pr  # again copy the binary rather than moving the symlink
# run benchmark
./node benchmark/compare.js --old ./node-master --new ./node-pr --filter $JOB_FILTER --runs $JOB_RUNS -- $JOB_CATEGORY | tee output.csv
cat output.csv | Rscript benchmark/compare.R
edit: updated bash job to be syntactically correct
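As an aside, one way the hypothetical apply-pr step could be implemented (an assumption, not an agreed convention) is via the refs GitHub publishes for pull requests:
# Possible expansion of the apply-pr placeholder: GitHub exposes every pull
# request at refs/pull/<id>/head, so it can be fetched and merged onto master.
git fetch origin "refs/pull/$PR_ID/head"
git merge --no-edit FETCH_HEAD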
@AndreasMadsen I was just saying that the benchmark group is coordinating access/time spent on our only dedicated hardware box, which is the most suitable for reliable results. I guess I should've been clearer about what I meant by "adding tests": exposing a job or similar that we could run from Jenkins (albeit with limited access).
@AndreasMadsen the other consideration is that if we run on the regular machines, tying up one of them for 4+ hours could affect the progress of the regular builds, which are much faster. As @gareth-ellis mentioned, consolidating the data down to something consumable is the challenge for regular runs. When you say you want to run a particular benchmark for a particular PR, is there a key subset of the benchmarks that you find yourself wanting to run, as opposed to all of them? It might be easier to tackle being able to run those first.
I understand, but that is the same reason we don't want to do it on our own machines.
In this case we have the
Yes, it is a subset. But the subset depends on the PR, thus I think these should be configurable parameters in the job. Please see the bash job I outlined above for how this is typically accomplished.
@AndreasMadsen, do you need to have ./node in the Rscript line? When I run with that, I get an error; removing ./node, however, seems to generate some numbers.
I've prototyped something in my fork of benchmarking; I'll open a PR shortly to get some comments. For a CI run we'd need a job where we can supply, at a minimum, a mandatory BRANCH parameter. We will need to be careful, if we do implement this, that the machine is still able to continue the regular regression runs - though expanding them to include at least a subset of the benchmarks from https://github.com/nodejs/node would be beneficial.
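For illustration, a parameterized Jenkins job wrapping the earlier bash steps might use a shell build step along these lines. BRANCH is the mandatory parameter mentioned above; FILTER, RUNS and CATEGORY are assumed optional parameters borrowed from the earlier bash job, not an agreed interface, and this is only a sketch:
# Hypothetical Jenkins shell build step (sketch only).
# BRANCH is mandatory; FILTER, RUNS and CATEGORY are assumed optional
# parameters with defaults, mirroring the bash job earlier in the thread.
set -e
git checkout "$BRANCH"
./configure && make -j"$(nproc)"
cp out/Release/node ./node-new
git checkout master
./configure && make -j"$(nproc)"
cp out/Release/node ./node-old
# master is checked out last, so the benchmark directory comes from master
./node-old benchmark/compare.js --old ./node-old --new ./node-new \
    --filter "${FILTER:-simple}" --runs "${RUNS:-30}" "${CATEGORY:-http}" | tee output.csv
cat output.csv | Rscript benchmark/compare.R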
No, sorry about that. It is just Rscript.
@gareth-ellis also the default
@nodejs/build @nodejs/benchmarking Is this something that might realistically happen? Is there consensus that it would be desirable? Any clear prerequisites (like more CI resources)? Is there a path forward on this?
@mcollina gave us an update in nodejs/benchmarking#58 a while ago, see nodejs/benchmarking#58 (comment).
So, the server is available and wired up to the net. It is a matter of attaching it to the rest of the infrastructure and starting to make good use of it.
I would actually consider one of the linuxONE machines; there are several available, and they are super fast. I'm also assuming they could be configured to have some guarantee of available resources.
Are those dedicated machines?
Building node.js takes almost no time compared to running the benchmarks.
This is the issue: we need a machine that is only used for benchmarks (AIUI). We have 3 linuxone machines (2 test, 1 release), and they're all in pretty heavy use. I suspect that currently the release machines are mostly idle, other than an hour or two building nightlies, but I'm not sure it'd be feasible to use them. If we could run this on a normal CI test machine, then that would be a lot easier to work with. Our faster machines (like the linuxone boxes) are frequently idle, and we could probably survive for an hour or two with just one (similar to when people run the stress-test job).
We have a machine ready to hook up for this. cc @jasnell
You mean there's a machine in CI already, or that you have one you want to donate? Either way, you should raise an issue in the build repo, and we'll have to sort out Ansible configs for it.
I think this is going to be a longer process. In an ideal world we would have infinite time across all platforms; in reality we don't have that. We should start with the hardware that is being donated, then we can move forward and evaluate the benefit/possibility of adding additional platforms for benchmarking. Once the machine is in the CI, we can do some exploratory runs to evaluate the results we get, and move forward from there.
But if they can run the benchmark suite in a short time (I'd assume < 20 min), IMHO it's still a good option. We could mitigate resource fluctuations by running the comparison in a checkerboard pattern, as sketched below.
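To make the checkerboard idea concrete: rather than measuring all runs of one binary and then all runs of the other, alternate them so that slow drift in machine load affects both about equally. A minimal sketch; the ./node-old and ./node-new paths are placeholders, and running http/simple.js directly assumes a supported HTTP load generator (e.g. wrk) is installed:
# Interleave ("checkerboard") the two binaries instead of running one
# after the other, so gradual changes in machine load hit both equally.
for i in $(seq 1 10); do
  ./node-old benchmark/http/simple.js >> old-runs.txt
  ./node-new benchmark/http/simple.js >> new-runs.txt
done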
I'm not convinced that running the entire suite is an option. I see the running of benchmarks as a targeted subset for a PR, or perhaps as an overall summary - and that would have to be a reduced amount. Last time I tried to run a compare on the whole suite, I kicked it off on Friday and had to abandon the run on Monday as it was still running. (This wasn't on a zLinux / LinuxONE machine, however it was a fairly powerful machine. LinuxONE comes with clock speeds of either 4.3GHz or 5GHz, so that is one reason we see the tests and builds run quicker - even though you can't really compare 5GHz on LinuxONE with 5GHz on Intel / POWER etc.; they're different architectures, so may achieve more or less in one clock cycle.) See nodejs/benchmarking#58 (comment) for my summary. However, as I said, running the whole suite should not really be how we do things; targeted testing would be key.
The entire suite would take days. Statistically, it is also completely useless to run the entire suite, because with that many individual comparisons the risk of false positives is too high: at a 5% significance level, roughly 1 in 20 unchanged benchmark configurations will still be flagged as significant purely by chance, so hundreds of configurations virtually guarantee spurious results.
This seems to be resolved now that we have a benchmark job in the CI. If any additional functionality or whatever is needed then a new issue should be opened. This one has become really stale.
A problem with our current benchmark system is that it takes a long time to compare two node versions. While many of the benchmarks likely use too many iterations (and thus time), that is not the case for all of them. See for example #8139: http/simple.js takes 4 hours per node version, and it appears that is necessary in this case.
I personally don't own an extra computer to run benchmarks on, thus I think it would be nice if we could create a jenkins job on ci.nodejs.org that would compare two node versions on a benchmark subset like http/simple.js.
I don't know how the CI setup works, but it would have to be a machine (virtual or physical) that has constant performance. Otherwise we would get false positive results like we have seen here: #7425 (comment)
I don't think we need a machine for every platform, just those where we prioritize performance (Windows and Linux?).