Rework benchmarks #2507
Conversation
Preliminary results from my local system

Sqlite
Notable Results:
Raw results

Postgresql
Notable Results:
Raw results

Mysql
Notable Results:
Raw results
Hey @weiznich! Interesting that you did the benchmarks, I'm happy to see the results. I'm 100% sure that SQLx/tokio-postgres et al. will be faster than us, just due to our dynamic dispatching in the connectors, and I don't have any time (or need) to start optimizing it yet. My big plan is to take the connectors out of Quaint and let it act just as a query builder for whatever connector the user needs. But there's no time for me to do that now. The thing with SQLite is that it's not really an async database, so what we've been doing is just blocking in an async context. When adding a runtime on top of that, it naturally gets slower. You might want to try to enable … One trick I can do for now is to use tokio's … P.S. try the latest masters in SQLx and see if they perform better. They've been optimizing SQLite performance in the past months.
I finally got the time to fix and run the mysql benchmarks. (Rustorm was broken/really slow due to not implementing batch inserts on mysql correctly.) This means there is now a full set of results in the second post. @diesel-rs/reviewers If any of you wants to learn some performance investigation, here are some points that could be worth looking at:
I would probably just run the corresponding benchmark using …
First of all, I didn't do those benchmarks to point at anyone and say: your software is slow, you need to improve it. I did those benchmarks mostly to get: a) some set of tests that allow us as the diesel team to measure whether a certain change has a large impact on performance or not (I've done a few such potentially performance-critical changes recently and wished there was something to check against). That said: I was quite surprised to see that for larger result sets …
One of my main conclusions from the preliminary results is that there will not be a …
I've used https://github.com/launchbadge/sqlx/compare/v0.4.0-beta.1...master
I will try to run the sqlite benchmarks with that change again as soon as I find some time.
:diesel run benches
cc @fafhrd91 who may be interested in this
@weiznich, thanks. These results are quite informative. To be honest, I was quite surprised (in a negative way) by the … Another observation regarding your benchmark code is unnecessary copying, implied by …
@blackbeam Thanks for the hint. I've updated the benchmark accordingly. Your changes definitely show improvements for small result sets, but the slowdown for large queries remains. I would guess that this is somehow related to how data is deserialized. New Mysql Results
This commit reworks our benchmark suite:
* Use criterion instead of the std-lib bencher
* Benchmark more different sizes
* Add benchmarks for other rust db-crates as comparison
* Add a new insert benchmark
@weiznich Do you want to push the results to the README, together with the date and versions?
@pickfire This is not planned, as those benchmarks are written to be a tool for internal comparison, not for promoting diesel. The problem there is that writing representative benchmarks is really hard, and I don't know the other libs well enough that it would really be fair. That said, those benchmarks can become something like that at a later point in time, but that would require some more tuning (in the best case from the original crate authors). Any contributions here are welcome. Otherwise I'm thinking about writing a bigger blog post comparing the different options for connecting to relational databases using rust. Such a post would then also contain a commented version of these benchmark results.
But one thing I think would be interesting is to compare against drogon https://github.com/an-tao/drogon (c++) as a baseline, since their database layer seemed fast enough to take them to first place in the last techempower benchmark, especially on multiple queries. It may be harder to compare since it is not rust, but their …
@pickfire That's not something that's on my personal todo list because, as written above: the current set of benchmarks gives a quite good overview of the state of database connection crates in rust. For me that's a big enough data point for now. There are more than enough open issues and feature points to work on before I would have the time and motivation to dig into some obscure c++ database for the sake of having the fastest benchmark. I would rather build a reliable library for working with different kinds of relational databases.
Create a distinct benchmark suite that tests various rust database connection crates. Those benchmarks are created with the following goals:
a) To track diesels performance and find potential regressions
b) To evaluate potential alternatives for the currently used C-dependencies
c) To compare diesel with the competing crates
It currently supports the following database systems:
and the following crates:
I've included other crates in this setup as well, as there are at least 2 pure-rust database connection implementations for postgres and mysql. As a long-term goal it can be interesting for diesel to switch away from using the corresponding c-based libraries for those databases, to simplify the setup and replace potentially unsafe C code with mostly safe rust code. In my opinion such a replacement should at least fulfill the following requirements:
Additionally, a comparison with those crates provides, in my opinion, at least a crude way to estimate how large the overhead introduced by diesel is compared to handling everything manually.
By default only the diesel benchmarks are built and executed. For all other benchmarks you need to enable their corresponding feature flags (depending on the crate, more than one; see the README for details).
Open Questions:
@github-thing run-benches
or so to get a diesel benchmark comparison between the current PR and the master branch?)

cc @mehcode @pimeys @ivanceras @gwenn @sfackler @blackbeam as the corresponding benchmarks could also be interesting for your crates (If not, sorry for bothering you in advance)