Add some benchmarks #66
Conversation
Looks good. How reproducible are the results using the randomisation? I'd prefer to use the same input every time we run a benchmark, but we can use random-ish input if there is not too much noise in the results.
I'd also prefer not to benchmark the round trip to the server, and just benchmark the client side of things, but I don't think we can do that yet (or not easily). No harm in having both too.
@nrc Yes! Non-integration benchmarks are coming, but I wanted to do some full round-trip tests now. :) The benchmarks are fairly conventional. I think you're right that using the same sample each run would give us more predictable performance, but we can't really expect run-to-run consistency in these kinds of tests anyway: there is far too much going on for a slight change in buffer sizes/values to matter much, and these aren't necessarily clean-slate or configuration-enforced tests, since TiKV can be configured however the user likes. Randomised input also keeps us honest and prevents us from optimising for specific values. That said, I have no strong opinions.
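To make the trade-off concrete, a full round-trip benchmark along these lines might look like the sketch below. This is a minimal sketch, not the code in this PR: it assumes a Criterion harness plus a `tikv_client`-style raw API, and the `Client::new`/`Config::new`/`put` names and signatures are illustrative assumptions.

```rust
// A minimal sketch of a round-trip benchmark, NOT the code in this PR.
// Assumptions: the `criterion` and `futures` crates, and a hypothetical
// `tikv_client` raw API exposing `Client::new(Config)` and `put(key, value)`.
use criterion::{criterion_group, criterion_main, Criterion};
use futures::executor::block_on;
use tikv_client::{raw::Client, Config};

fn bench_raw_put(c: &mut Criterion) {
    // Hypothetical client setup; the real connection call depends on the crate's API.
    let config = Config::new(vec!["127.0.0.1:2379".to_owned()]);
    let client = block_on(Client::new(config)).expect("failed to connect to TiKV via PD");

    c.bench_function("raw_put_roundtrip", |b| {
        b.iter(|| {
            // Each iteration measures the full client -> PD/TiKV network round
            // trip, which is why small changes to the input values are unlikely
            // to dominate the timings.
            block_on(client.put(b"bench-key".to_vec(), vec![0u8; 64])).expect("put failed");
        })
    });
}

criterion_group!(benches, bench_raw_put);
criterion_main!(benches);
```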
Maybe I found an ICE (internal compiler error)?
I can repro with today's nightly, but not with my previous one, so it's a new bug from the last week or so.
Looks like rust-lang/rust#61442. It should be fixed by rust-lang/rust#61572, so I think we're better off waiting a few days for that to land rather than working around it.
I think it would be useful to add to the README how to run the benchmarks. Here's a skeleton blurb for how I might structure it:

Running benchmarks

Some of the benchmarks require a connection to a PD and TiKV instance. To run them, the addresses of the PD instances must be passed in as an environment variable:

```sh
PD_ADDRS=127.0.0.1:2379 cargo +nightly bench
```

where `PD_ADDRS` is the address of the PD instance(s). It is possible to specify running specific benchmarks, for example:

```sh
PD_ADDRS=127.0.0.1:2379 cargo +nightly bench --features integration-tests
```
@jlerche Great catch, added that! |
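As a side note, the harness can pick the variable up with plain `std::env`. The following is a minimal sketch: the `pd_addrs` helper name is made up, and the comma-separated format is an assumption mirroring the blurb above.

```rust
// Minimal sketch: read PD addresses from the PD_ADDRS environment variable.
// `pd_addrs` is a hypothetical helper name; the comma-separated format is an
// assumption, not a documented contract of this PR.
fn pd_addrs() -> Vec<String> {
    std::env::var("PD_ADDRS")
        .expect("set PD_ADDRS (e.g. PD_ADDRS=127.0.0.1:2379) to run these benchmarks")
        .split(',')
        .map(|addr| addr.trim().to_owned())
        .collect()
}
```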
I'll base this on #68 since it's a better strategy.
ping @Hoverbear, are you still planning to rebase this PR?
@nrc Yes, I'd like to rebase this, but I need to get fixtures working first! I think the best way is to use a pre-seeded random generator; that way we don't have to ship fixture data.
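A pre-seeded generator could look like the sketch below. It is a minimal sketch assuming the `rand` crate, with a made-up helper name and key format; because the seed is fixed, every run on every machine sees identical keys and values, so no fixture data needs to live in the repository.

```rust
// Minimal sketch of pre-seeded fixture generation, assuming the `rand` crate.
// The helper name and key format are illustrative, not from this PR.
use rand::{rngs::StdRng, RngCore, SeedableRng};

/// Deterministically generate `count` key/value pairs with `value_len`-byte values.
fn fixture_pairs(count: usize, value_len: usize) -> Vec<(Vec<u8>, Vec<u8>)> {
    // Fixed seed: the same data on every run and every machine.
    let mut rng = StdRng::seed_from_u64(42);
    (0..count)
        .map(|i| {
            let key = format!("bench-key-{:08}", i).into_bytes();
            let mut value = vec![0u8; value_len];
            rng.fill_bytes(&mut value);
            (key, value)
        })
        .collect()
}
```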
Nick and I discussed this and we'd like to try another method.
This PR adds benchmarks that exercise the full round trip to TiKV.
I'll be adding proptest-related tests and more benchmarks as we go, but this is a good scaffold.
Note: I also had to renormalize some line endings.
Some sample results: