[bench] Fix benchmark/serve.py to ignore unavailable results #22382
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Code Review
This pull request fixes an issue in benchmark/serve.py where saving benchmark results could fail with a KeyError if a metric was not present in the results. The fix correctly adds a check to ensure each metric exists before accessing it. My review identifies one small area for improvement: a redundant check was added in the extra_info dictionary comprehension. I've suggested removing it to improve code clarity and maintainability.
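As a hedged sketch of the failure mode this PR addresses (the function and variable names below are illustrative assumptions, not the exact vLLM source), guarding metric access looks roughly like this:

```python
# Sketch only: names like `results` and `collect_metrics` are hypothetical.
# Before the fix, indexing results[name] raises KeyError for a metric that
# the benchmark run did not produce; the fix skips unavailable metrics.

def collect_metrics(results: dict, metric_names: list) -> dict:
    # Include only metrics that are actually present in the results dict.
    return {name: results[name] for name in metric_names if name in results}

results = {"throughput": 123.4, "ttft": 0.05}  # "latency" is unavailable
saved = collect_metrics(results, ["throughput", "ttft", "latency"])
```

With the guard in place, `saved` simply omits the unavailable `"latency"` key instead of raising.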
vllm/benchmarks/serve.py
Outdated
for k in results if (
    k not in metrics and k not in ignored_metrics and k in results)
The check k in results is redundant here. The dictionary comprehension already iterates over the keys of the results dictionary with for k in results, so k is guaranteed to be in results. You can simplify this condition to improve code clarity.
for k in results if k not in metrics and k not in ignored_metrics
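To illustrate the reviewer's point with a small, self-contained example (the variable contents are hypothetical), the extra membership test is dead code because the comprehension already iterates over the keys of `results`:

```python
# `for k in results` only yields keys of `results`, so the trailing
# `k in results` condition can never be False and can be dropped.
results = {"throughput": 1, "ttft": 2, "note": 3}
metrics = {"throughput"}
ignored_metrics = {"ttft"}

redundant = {k: results[k] for k in results
             if k not in metrics and k not in ignored_metrics and k in results}
simplified = {k: results[k] for k in results
              if k not in metrics and k not in ignored_metrics}

assert redundant == simplified  # identical result either way
```

Both comprehensions produce `{"note": 3}` here; removing the redundant check changes nothing except readability.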
fixed
…oject#22382) Signed-off-by: Linkun <github@lkchen.net>
…llm-project#22382)" This reverts commit 7723c45.
…oject#22382) Signed-off-by: Linkun <github@lkchen.net> Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
…oject#22382) Signed-off-by: Linkun <github@lkchen.net> Signed-off-by: Noam Gat <noamgat@gmail.com>
…oject#22382) Signed-off-by: Linkun <github@lkchen.net> Signed-off-by: Paul Pak <paulpak58@gmail.com>
…oject#22382) Signed-off-by: Linkun <github@lkchen.net> Signed-off-by: Diego-Castan <diego.castan@ibm.com>
…oject#22382) Signed-off-by: Linkun <github@lkchen.net> Signed-off-by: Xiao Yu <xiao.yu@amd.com>
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.

Purpose
FIX #22381
Test Plan
Test Result
(Optional) Documentation Update