Add benchmark for Docutils #216
Conversation
This is a good choice for a benchmark -- a real-world workload over a non-trivial codebase.
In the interest of repository size, would it be possible to remove the images? I don't think docutils does much with them, other than linking to them (though correct me if I'm mistaken), so maybe we only need to include one blank image and adjust all of the links to point to that.
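A hedged sketch of one way to do that blanking with pathlib (the docs directory name and the extension list are my assumptions, not from this PR):

```python
from pathlib import Path

# Truncate every image under the vendored docs tree to zero bytes, so the
# files and the links to them survive but the repository stays small.
for image in Path("docs").rglob("*"):
    if image.suffix.lower() in {".png", ".jpg", ".jpeg", ".gif", ".svg"}:
        image.write_bytes(b"")
```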
I blanked every image file, so the files are still there but empty, and I moved the I/O to be outwith the timing code; I couldn't think of anything better. A
I have no clue what is causing CI to fail. A
I think the clue might be in here: in some cases during the test run, the new benchmark function is returning a time that is zero. You could try to reproduce this locally.
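For context, pyperf's time-function protocol expects the benchmark callable to take a loop count and return the total elapsed time in seconds, and a zero result fails the run. A minimal sketch of that contract, with a placeholder workload (the names here are illustrative, not this PR's code):

```python
import pyperf


def bench(loops: int) -> float:
    # pyperf passes in a loop count and expects total elapsed seconds back;
    # returning zero (e.g. if the loop body found nothing to do) aborts the
    # run with the error described above.
    t0 = pyperf.perf_counter()
    for _ in range(loops):
        sum(range(10_000))  # stand-in workload
    return pyperf.perf_counter() - t0


if __name__ == "__main__":
    pyperf.Runner().bench_time_func("sketch", bench)
```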
Thanks for addressing my concerns. I'm approving this pending CI passing.
Co-authored-by: Michael Droettboom <mdboom@gmail.com>
LGTM
@ericsnowcurrently / @gvanrossum you've both committed recently; if you have any time for a review of this PR, I'd appreciate it! Thanks, A
Can I bow out? Eric and/or Mike will be able to review this.
```python
elapsed = 0
for file in doc_root.rglob("*.txt"):
    file_contents = file.read_text(encoding="utf-8")
    t0 = time.perf_counter_ns()
```
Suggested change:

```diff
-t0 = time.perf_counter_ns()
+t0 = pyperf.perf_counter()
```
"output_encoding": "unicode", | ||
"report_level": 5, | ||
}) | ||
elapsed += time.perf_counter_ns() - t0 |
Suggested change:

```diff
-elapsed += time.perf_counter_ns() - t0
+elapsed += pyperf.perf_counter() - t0
```
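Putting the excerpt and both suggestions together, here is a minimal sketch of how the timed loop might look after the change. Only `doc_root`, the `*.txt` glob, the two settings overrides, and the `elapsed` bookkeeping come from the excerpts above; `publish_string`, the `html5` writer, the data path, and the function and benchmark names are assumptions for illustration:

```python
from pathlib import Path

import pyperf
from docutils.core import publish_string

doc_root = Path(__file__).parent / "docs"  # assumed data location


def bench_docutils(loops: int) -> float:
    elapsed = 0.0
    # Read every source file up front so the I/O stays outside the
    # timed region, as discussed above.
    sources = [
        path.read_text(encoding="utf-8") for path in doc_root.rglob("*.txt")
    ]
    for _ in range(loops):
        for source in sources:
            t0 = pyperf.perf_counter()
            publish_string(
                source=source,
                writer_name="html5",  # assumed writer; the PR may differ
                settings_overrides={
                    "output_encoding": "unicode",
                    "report_level": 5,  # silence parser warnings
                },
            )
            elapsed += pyperf.perf_counter() - t0
    return elapsed


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_time_func("docutils", bench_docutils)
```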
@AA-Turner: It would be great to have this. Any chance you have time to address @kumaraditya303's concerns?
@mdboom / @kumaraditya303 sorry for the delay here; please could you re-review? A
This adds a benchmark of Docutils as an application. I thought a reasonable test load would be Docutils' own docs (building them takes ~4.5-5s on my computer).
I haven't submitted a benchmark before, so I don't know the best way of storing the input data; for speed I copied the documentation into Git here (the docs are public domain).
A