
Benchmark rw test performance #4356

Closed
thedavidprice opened this issue Feb 3, 2022 · 3 comments · Fixed by #4504

Comments

thedavidprice (Contributor) commented Feb 3, 2022

related to #4208 and #4096 and #4360

rw test performance has been improved by reducing overall memory usage. Although the rw test web memory leak appears resolved and rw test api memory usage has been cut in half, there is still potential for performance issues with the api tests in the future. (Jest is notorious for memory leaks.)

We should establish a benchmark to monitor Jest test performance. Ideally we would have analytics based on jest --logHeapUsage, but that shouldn't be the default because it adds overhead and time to running tests. Duration correlates reasonably well with performance, especially if we also know how many tests were run. So ideally we would have data on:

  • duration of the test run
  • number of tests run

An ideal solution might include:

  1. adding a helper to the Structure package model for test files
  2. using Jest configuration reporters (see this example)
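
To make option 2 concrete, here's a rough sketch of what a custom reporter could look like (illustrative only, not something that exists in the codebase; the console.log is a placeholder for whatever telemetry sink we'd actually use):

```js
// jest-benchmark-reporter.js -- sketch only, not in the codebase
class BenchmarkReporter {
  // Jest calls onRunComplete once at the end of a run with the aggregated results
  onRunComplete(_contexts, results) {
    const summary = {
      durationMs: Date.now() - results.startTime,
      numTotalTests: results.numTotalTests,
      numPassedTests: results.numPassedTests,
      numFailedTests: results.numFailedTests,
    }
    // Placeholder sink: a real version would send this to a telemetry backend
    console.log('[rw test benchmark]', JSON.stringify(summary))
  }
}

module.exports = BenchmarkReporter
```

It would be wired up through the `reporters` option in the Jest config, e.g. `reporters: ['default', '<rootDir>/jest-benchmark-reporter.js']`.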

Both of those requirements need exploration and time. For the near term, we could try this approach:

  • focus on benchmarking via our CI, which uses a known project structure (and quantity of tests)
    • we might need to add tests so there's a minimum number to benchmark
  • only get test duration data for the case of yarn rw test --no-watch (which is used during CI)
    • we can use the same method currently employed for build duration
thedavidprice (Contributor, Author) commented

@cannikin Thoughts in general about this?

I've also added to discussion list for next team meeting.

cannikin (Member) commented Feb 3, 2022

We could look for the --no-watch flag and only time it in that case. If you look at the build script you can see how I just surround the whole run with timedTelemetry() and then everything fires off automatically, it's pretty nice! If there's a big enough error that all of jest blows up, it should get sent with errorTelemetry() in the catch right after that.
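
A minimal sketch of what that could look like in the test command handler, assuming `timedTelemetry()` and `errorTelemetry()` are called the same way they are in the build command (the command string here is a placeholder for however the real handler assembles the Jest CLI flags):

```js
// packages/cli/src/commands/test.js -- sketch only, not the actual implementation
import execa from 'execa'

import { errorTelemetry, timedTelemetry } from '@redwoodjs/telemetry'

export const handler = async ({ watch, ...args }) => {
  // Placeholder: the real handler would forward the user's Jest flags
  const jestCommand = 'yarn jest'

  try {
    if (!watch) {
      // --no-watch (CI) runs cover the whole suite, so durations are comparable
      await timedTelemetry(process.argv, { type: 'test' }, async () => {
        await execa.command(jestCommand, { stdio: 'inherit' })
      })
    } else {
      await execa.command(jestCommand, { stdio: 'inherit' })
    }
  } catch (e) {
    // If all of Jest blows up, report the failure the same way build does
    errorTelemetry(process.argv, e.message)
    process.exit(1)
  }
}
```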

Is there somewhere to hook into the Jest runner when it's in watch mode, and run some custom code at the end of a run? We could maybe fire off individual telemetry events in there. But, if you only run one test it'll be way faster than if you run the whole suite, and that would massively throw off the numbers.

Likewise, if we can hook in there, maybe we can get the number of tests that are being run. We could just count them ourselves, but you could have passed options to the test command to only run some of the tests, right? In which case the tests-to-duration comparison is going to be wildly wrong.
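
One possible hook for watch mode is Jest's watch plugins, which expose an onTestRunComplete hook that fires at the end of every watch-mode run and receives the aggregated results. A rough sketch (illustrative only; the log call is a placeholder for a real telemetry event):

```js
// watch-run-telemetry-plugin.js -- sketch only
class WatchRunTelemetryPlugin {
  // Jest calls apply() once, passing hooks we can subscribe to
  apply(jestHooks) {
    jestHooks.onTestRunComplete((results) => {
      // results only covers the tests selected for this run, so the count
      // matters as much as the duration when comparing runs
      console.log('[rw test watch run]', {
        durationMs: Date.now() - results.startTime,
        numTotalTests: results.numTotalTests,
      })
    })
  }
}

module.exports = WatchRunTelemetryPlugin
```

It would be registered via `watchPlugins: ['<rootDir>/watch-run-telemetry-plugin.js']` in the Jest config. Since the results only cover the tests selected for that run, any watch-mode telemetry would need to report the test count alongside the duration.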

thedavidprice (Contributor, Author) commented

> We could look for the --no-watch flag and only time it in that case. If you look at the build script you can see how I just surround the whole run with timedTelemetry() and then everything fires off automatically, it's pretty nice! If there's a big enough error that all of jest blows up, it should get sent with errorTelemetry() in the catch right after that.

I think this is a great next step.
