Extended benchmark suite #893
Comments
I don't think writing to the current TTY is ideal; it will depend too much on the performance of your particular terminal emulator. We could create a pseudo-terminal with …

I think it would be nice if these output types were …
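For what it's worth, a minimal sketch of the pseudo-terminal idea, assuming the util-linux `script` utility (the concrete tool suggested in the comment above is cut off, so this is only one possible way to do it):

```bash
# Hedged sketch: util-linux `script` allocates a pseudo-terminal, runs the
# command inside it, and writes the session log to /dev/null here, so the
# real terminal emulator is not involved. `bench-tree` is a placeholder path.
hyperfine "script -qec 'fd --hidden --no-ignore . bench-tree' /dev/null"
```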
👍
Interesting idea! That would be a great extension of sharkdp/hyperfine#377, which essentially asks for …

Yeah, that feature request essentially asks for …
Can I work on this?

Sure

Should the benchmarks be a GitHub workflow?

I'm not sure the GitHub runners are consistent enough for reliable benchmarking.
I wrote a script for benchmarking bfs that might solve these issues: https://github.com/tavianator/bfs/blob/benchmarks/bench/clone-tree.sh. It checks out a specific tag/commit so the dataset should be reproducible. And it uses …
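For illustration, a reproducible corpus could be obtained roughly like this (these exact commands, the tag, and the repository are assumptions for the sketch, not necessarily what clone-tree.sh does):

```bash
# Pin an exact tag and drop the .git directory so the on-disk contents
# cannot drift or grow between benchmark runs. The repository and tag
# (Linux v6.5) are only examples.
git clone --depth 1 --branch v6.5 https://github.com/torvalds/linux.git bench-corpus
rm -rf bench-corpus/.git
```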
As discussed in #885, the current benchmark suite has a few shortcomings.
The most obvious one is that there is no standardized dataset. Past ideas involved using large Git repositories (Linux, Chromium, Rust compiler), but these repositories change over time. We cannot simply check out a certain state because the `.git` folder will still grow. It's also painfully slow to clone these large repositories. A better idea might be to create a benchmark folder programmatically with dummy content. This is not a trivial task though, because the folder content should be somewhat realistic in terms of statistical properties (files per folder, average depth of subtrees, etc.). And ideally it should also reflect the state of a typical home folder that has been used for years (I'm thinking of file system fragmentation... without knowing how much of an issue that would be).
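A rough sketch of what such a programmatic dataset generator could look like, in shell. Everything here (tree shape, file counts, extensions, names) is an illustrative assumption, not a proposed dataset definition:

```bash
#!/usr/bin/env bash
# Generate a small, deterministic synthetic directory tree for benchmarking.
# The statistical properties are hard-coded placeholders and would need
# tuning to resemble a real home folder.
set -euo pipefail

root="${1:-bench-tree}"
depth=5           # maximum directory depth
dirs_per_level=4  # subdirectories created in each directory
files_per_dir=20  # files created in each directory

make_level() {
    local dir="$1" level="$2" i ext
    mkdir -p "$dir"
    for ((i = 1; i <= files_per_dir; i++)); do
        # Vary extensions so extension/pattern searches have something to match.
        case $((i % 3)) in
            0) ext=txt ;;
            1) ext=rs ;;
            2) ext=log ;;
        esac
        printf 'dummy content\n' > "$dir/file-$i.$ext"
    done
    if ((level < depth)); then
        for ((i = 1; i <= dirs_per_level; i++)); do
            make_level "$dir/sub-$i" "$((level + 1))"
        done
    fi
}

make_level "$root" 1
echo "created $(find "$root" | wc -l) entries under $root"
```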
Second, we have some benchmarks that mainly measure the output speed of `fd`. These benchmarks currently write to `/dev/null`. They should probably be extended by other benchmarks where we write to another program via a pipe, and maybe to a file.
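For example, the pipe and file variants could be added along the lines of the following sketch (the `bench-tree` path and the search pattern are placeholders; hyperfine runs each command through a shell, so pipes and redirections work as written):

```bash
# Compare the existing /dev/null sink with a pipe and a regular file.
hyperfine --warmup 3 \
    "fd --hidden --no-ignore . bench-tree > /dev/null" \
    "fd --hidden --no-ignore . bench-tree | cat > /dev/null" \
    "fd --hidden --no-ignore . bench-tree > /tmp/fd-output.txt"
```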
Third, we should add benchmarks that actually write to a TTY. We can do this with `hyperfine`'s `--show-output` option (this will spoil the output of our `regression.sh` script). Note that this will then very much depend on the terminal emulator speed (for searches with a large amount of results).
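A minimal sketch of such a TTY benchmark (again with a placeholder dataset path):

```bash
# --show-output makes hyperfine forward fd's output to the current terminal
# instead of discarding it, so the timing includes terminal emulator speed.
hyperfine --warmup 3 --show-output "fd --hidden --no-ignore . bench-tree"
```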
Fourth, we should maybe move the benchmark scripts into this repository?