
Option to re-display a benchmark file #185


Open
jaredoconnell wants to merge 20 commits into main

Conversation

jaredoconnell (Collaborator)

closes #175

This adds a command to re-display a prior benchmarks file in the CLI.

Before marking this ready for review, we need to decide on the command format. On the call with Mark, we discussed making this an option within the benchmark command.
Also, let me know if the stripped-down results file is a good one to use; I manually removed data from a large results file.

jaredoconnell marked this pull request as ready for review on June 16, 2025, 17:04


type=click.Path(),
default=Path.cwd() / "benchmarks.json",
)
def display(path):
Inline review comment (Collaborator):

display seems fine to me, but @markurtz may have a better keyword in mind.
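For context, the quoted hunk plausibly belongs to a command along these lines; the decorator shape, argument name, and body are illustrative assumptions, not the PR's exact code:

    from pathlib import Path

    import click

    @click.command(help="Redisplay a saved benchmark report.")
    @click.argument(
        "path",
        type=click.Path(),
        default=Path.cwd() / "benchmarks.json",
    )
    def display(path):
        # Hypothetical body: load the saved report and reprint it.
        click.echo(f"Displaying {path}")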


Copilot AI (Contributor) left a comment


Pull Request Overview

Adds a “display” subcommand to reload and reprint an existing benchmark report file in the CLI.

  • Introduces print_full_report on the console to batch-print metadata, info, and stats.
  • Adds display_benchmarks_report entrypoint and wires it into __main__.py as the display command.
  • Includes unit tests and updates exports and pre-commit settings.
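
As a rough illustration of that flow, a minimal stand-in for the entrypoint might look like the sketch below; the loading and printing are simplified placeholders for guidellm's actual report and console types:

    import json
    from pathlib import Path

    import yaml  # PyYAML; assumed available, since the tests cover YAML reports

    def display_benchmarks_report(file: Path) -> None:
        # Load a saved report, picking the parser by file extension.
        text = file.read_text()
        if file.suffix in {".yaml", ".yml"}:
            report = yaml.safe_load(text)
        else:
            report = json.loads(text)
        # Stand-in for the console's print_full_report(report).
        print(json.dumps(report, indent=2))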

Reviewed Changes

Copilot reviewed 7 out of 11 changed files in this pull request and generated no comments.

Show a summary per file:

File | Description
src/guidellm/benchmark/output.py | Added print_full_report to consolidate full report output
src/guidellm/benchmark/entrypoints.py | Defined display_benchmarks_report to load and show a file
src/guidellm/benchmark/__init__.py | Exported the new display_benchmarks_report
src/guidellm/__main__.py | Added the display CLI command
tests/unit/entrypoints/test_display_entrypoint.py | New tests for JSON/YAML report display
.pre-commit-config.yaml | Excluded the assets directory from formatting hooks
Comments suppressed due to low confidence (4)

src/guidellm/__main__.py:289

  • [nitpick] The command name display is very generic. Consider renaming it to something like benchmark-display or benchmarks:display to avoid collisions and improve discoverability.
@cli.command(help="Redisplay a saved benchmark report.")

.pre-commit-config.yaml:6

  • The regex ^tests/?.*/assets/.+ may not correctly match nested tests/unit/assets paths. Consider using ^tests/.+/assets/.+ to reliably exclude asset files.
    exclude: ^tests/?.*/assets/.+
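
A quick way to sanity-check both patterns against the paths in question (an illustrative script, not part of the PR):

    import re

    patterns = {
        "original":  r"^tests/?.*/assets/.+",
        "suggested": r"^tests/.+/assets/.+",
    }
    paths = ["tests/assets/report.json", "tests/unit/assets/report.json"]

    for name, pattern in patterns.items():
        for path in paths:
            # Both patterns are anchored with ^, so match and search agree here.
            print(f"{name}: {path} -> {bool(re.match(pattern, path))}")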

src/guidellm/__main__.py:293

  • The CLI uses Path here, but this file has no from pathlib import Path. Please add the import at the top.
    default=Path.cwd() / "benchmarks.json",

src/guidellm/benchmark/entrypoints.py:138

  • This function annotates file: Path but Path is not imported. Please add from pathlib import Path.
def display_benchmarks_report(file: Path):
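
Both of these comments reduce to the same one-line addition at the top of each file:

    from pathlib import Path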

markurtz (Member) left a comment

Thanks, @jaredoconnell; overall, the code looks good. I want to expand the functionality to encompass not only displaying the results but also reexporting them to a new file if the user desires.

Given that, I'd recommend something along the lines of the following for the CLI:
guidellm benchmark report PATH --output OUTPUT, where --output is optional. If an output file is supplied, we resave the report to that path, using the extension to determine the file type. The subcommand could also be named export, convert, or something along those lines.

For the benchmark CLI pathways, it would then look like the following:
guidellm benchmark run ...
guidellm benchmark report ...

If the ACTION after benchmark is not supplied, we default it to run. This way, we namespace all of the commands under benchmark and add flexibility for the future.
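
For concreteness, the flows above might look like this (file names and flags are illustrative):

    guidellm benchmark run --target http://localhost:8000 ...
    guidellm benchmark report benchmarks.json --output benchmarks.yaml
    guidellm benchmark --target http://localhost:8000 ...   # defaults to: benchmark run ...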


jaredoconnell (Collaborator, Author)

This is ready for re-review.

Since the last reviews, there is now an option to re-export the benchmarks, and the command format has changed.

Click doesn't support a default command natively, so I'm using a custom Group class to handle the new behavior; a sketch of the approach follows. Rather than writing the default-command handling myself, I pulled in external code, because I found a lot of edge cases that broke the functionality.
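
A minimal sketch of the Group-subclass approach, assuming click and greatly simplified relative to the external code actually used (DefaultGroup and default_command are illustrative names):

    import click

    class DefaultGroup(click.Group):
        # Fall back to a default subcommand when the first CLI token is not a
        # known command name. Real implementations (like the external code used
        # in this PR) must also handle edge cases, e.g. option tokens appearing
        # before any command name.
        def __init__(self, *args, default_command=None, **kwargs):
            super().__init__(*args, **kwargs)
            self.default_command = default_command

        def resolve_command(self, ctx, args):
            if args and args[0] not in self.commands and self.default_command:
                # Unknown first token: assume the default command and keep the
                # token as one of its arguments.
                args = [self.default_command, *args]
            return super().resolve_command(ctx, args)

    @click.group(cls=DefaultGroup, default_command="run")
    def benchmark():
        """guidellm benchmark ..."""

    @benchmark.command()
    def run():
        click.echo("running benchmarks")

    @benchmark.command()
    @click.argument("path", type=click.Path(exists=True))
    @click.option("--output", type=click.Path(), default=None)
    def report(path, output):
        click.echo(f"redisplaying {path}")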

Should I document this command as Step 6 in the quick start?


jaredoconnell (Collaborator, Author)

The CI errors appear to come from an older commit's code. That's very odd.

Successfully merging this pull request may close the following issue:

Re-load and re-display a saved benchmark.json report (#175)