[Feature]: Additional metrics to enable better autoscaling / load balancing of vLLM servers in Kubernetes #5041
Comments
+1 would be great to have these!!!
They do look useful to me! Looking forward to the contribution! Also adding @ywang96 for awareness.
This is great - thank you @achandrasekar! Note that a few metrics in the list (e.g., request_input_length, request_output_length) are already supported by vLLM, so it would be great to consolidate them in your upcoming contribution. I do think we're currently missing a metric related to queue time, which is very important for deciding when to scale up inference services.
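For concreteness, here is a minimal sketch of what a queue-time histogram could look like using `prometheus_client`, the library vLLM uses for its existing metrics. The metric name, bucket boundaries, and the `on_request_scheduled` hook are illustrative assumptions, not vLLM's actual implementation:

```python
import time

from prometheus_client import Histogram

# Hypothetical metric name and buckets; vLLM's real metrics use the
# "vllm:" prefix, but nothing below is taken from the actual codebase.
REQUEST_QUEUE_TIME = Histogram(
    "vllm_request_queue_time_seconds",
    "Time a request spends waiting in the scheduler queue.",
    buckets=(0.01, 0.05, 0.1, 0.5, 1.0, 2.5, 5.0, 10.0),
)

def on_request_scheduled(arrival_time: float) -> None:
    # Hypothetical hook: called when the scheduler moves a request from
    # WAITING to RUNNING; the elapsed wall-clock time is the queue time.
    REQUEST_QUEUE_TIME.observe(time.time() - arrival_time)
```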
@ywang96 a couple of these are implemented in a branch. Will triage and help get them merged.
@achandrasekar do you mind bringing this to the otel semantic convention team meeting and discussing it there as well? We are working on the LLM Semantic Conventions, and this is an area that we do not cover yet. A related issue in otel semantic conventions: open-telemetry/semantic-conventions#1079. Here is the meeting info: https://docs.google.com/document/d/1EKIeDgBGXQPGehUigIRLwAUpRGa7-1kXB736EaYuJ2M/edit#heading=h.ylazl6464n0c
@gyliu513 yes, thanks for bringing this up. We discussed this in the last LLM Semantic Conventions meeting. I've created an issue (open-telemetry/semantic-conventions#1102) and an initial PR (open-telemetry/semantic-conventions#1103) to create the server metrics. Let's collaborate there. We can also discuss in the next semconv meeting.
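As a rough illustration of where that semconv work points, here is a sketch of recording a server-side request duration histogram with the OpenTelemetry Python SDK. The metric name and attribute key mirror what is proposed in open-telemetry/semantic-conventions#1103; treat both as unsettled assumptions while the PR is under discussion:

```python
from opentelemetry import metrics

meter = metrics.get_meter("vllm.server")

# Name and attribute key follow the draft GenAI server-metric semconv;
# both may change as open-telemetry/semantic-conventions#1103 evolves.
request_duration = meter.create_histogram(
    name="gen_ai.server.request.duration",
    unit="s",
    description="End-to-end duration of inference requests.",
)

def record_request(duration_s: float, model: str) -> None:
    # Record one completed request, attributed to the serving model.
    request_duration.record(duration_s, {"gen_ai.request.model": model})
```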
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!
A lot of the metrics mentioned here have been added by folks like:
A few more have been added which provide additional insights, but are not exactly what we are looking for with respect to autoscaling / load balancing, like:
Updated table below on what we still need:
cc @annapendleton, who is looking at some of this.
🚀 The feature, motivation and pitch
vLLM already provides some very useful metrics on model performance and load. A few metrics are still missing, however, and adding them would make it easier for orchestrators like Kubernetes to autoscale vLLM servers and to distribute load between multiple vLLM servers more efficiently. We have a proposal in the Kubernetes Serving WG to add these additional metrics to popular model servers, and we want to add them to vLLM as well.
Google doc link to the proposal, which lists the metrics we want to add and the reasoning behind them: https://docs.google.com/document/d/1SpSp1E6moa4HSrJnS4x3NpLuj88sMXr2tbofKlzTZpk/edit?usp=sharing&resourcekey=0-ob5dR-AJxLQ5SvPlA4rdsg (please request access if you are not able to view it).
Listing the metrics that we've identified to include in vLLM:
It would be good to add these metrics both for observability and for efficient orchestration.
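To make the orchestration use case concrete, here is a hedged sketch of a load balancer that scrapes each replica's `/metrics` endpoint and routes to the least-loaded server. It keys off `vllm:num_requests_running`, a gauge vLLM already exposes; the routing policy itself is purely an illustrative assumption:

```python
import urllib.request

from prometheus_client.parser import text_string_to_metric_families

def running_requests(base_url: str) -> float:
    """Scrape one vLLM server and return its running-request count."""
    with urllib.request.urlopen(f"{base_url}/metrics") as resp:
        text = resp.read().decode("utf-8")
    for family in text_string_to_metric_families(text):
        if family.name == "vllm:num_requests_running":
            return sum(sample.value for sample in family.samples)
    return float("inf")  # missing metric: treat replica as worst case

def pick_replica(replicas: list[str]) -> str:
    # Naive least-loaded routing: send the next request to the replica
    # currently running the fewest requests.
    return min(replicas, key=running_requests)
```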
Alternatives
No response
Additional context
cc @WoosukKwon @robertgshaw2-neuralmagic