Report usage for beam search #6404
Conversation
👋 Hi! Thank you for contributing to the vLLM project. A full CI run is still required to merge this PR, so once the PR is ready to go, please make sure to run it. If you need all test signals in between PR commits, you can trigger full CI as well. 🚀
```diff
@@ -184,6 +184,9 @@ def __init__(
         self._verify_args()
+        if self.use_beam_search:
+            # Lazy import to avoid circular imports.
+            from vllm.usage.usage_lib import set_runtime_usage_data
+            set_runtime_usage_data("use_beam_search", True)
```
IIUC, we don't track the number of beam search requests or their ratio, but rather whether the server has received at least one beam search request, right?
Correct. I think in the end I want to get to some understanding of "% of vLLM deployments using beam search"
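A hypothetical illustration of that metric (field names and report shape are assumptions, not the actual telemetry schema): given one usage report per deployment, the share using beam search is simply the fraction of reports carrying the flag.

```python
def beam_search_deployment_ratio(reports: list[dict]) -> float:
    """Fraction of deployment reports that flagged beam search usage.

    Each report is assumed to be a dict with an optional boolean
    "use_beam_search" key, set by set_runtime_usage_data.
    """
    if not reports:
        return 0.0
    using = sum(1 for r in reports if r.get("use_beam_search"))
    return using / len(reports)


# Example: one of three deployments flagged beam search.
reports = [
    {"deployment_id": "a", "use_beam_search": True},
    {"deployment_id": "b"},
    {"deployment_id": "c", "use_beam_search": False},
]
print(beam_search_deployment_ratio(reports))
```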
Verified from testing that the data has been received.
Signed-off-by: Alvant <alvasian@yandex.ru>
This way we can be informed before fully removing beam search.