### 🚀 The feature, motivation and pitch
There is huge potential in advanced load balancing strategies tailored to the unique characteristics of AI inference, compared to basic strategies such as round robin. The llm instance gateway is one such effort and is already demonstrating significant performance wins. vLLM can demonstrate leadership in this space by providing better integration with advanced LBs/gateways.
This doc captures the overall requirements for model servers to better support the llm instance gateway. Fortunately, vLLM already has many features/metrics that enable more efficient load balancing, such as the exposed KVCacheUtilization metric.
This is a high-level breakdown of the feature requests:
**Dynamic LoRA Load/unload**
**Load/cost reporting in metrics**
- Many useful metrics are already available: https://docs.vllm.ai/en/latest/serving/metrics.html
- Add LoRA serving metrics (max loras, active loras). Done in [MISC] Add lora requests to metrics #9477
- Add `num_tokens_running` and `num_tokens_waiting` metrics. vLLM already has running and waiting request counts; exposing token-level metrics will further enhance LB algorithms (see the sketch after this list).
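As a rough illustration, the new gauges could sit next to the existing request-count gauges. This is a minimal sketch using `prometheus_client`, not vLLM's actual metrics plumbing; the gauge names come from this proposal, while the `scheduler.running`/`scheduler.waiting` accessors and the `token_ids` attribute are hypothetical stand-ins:

```python
from prometheus_client import Gauge

# Proposed token-level counterparts to the existing
# vllm:num_requests_running / vllm:num_requests_waiting gauges.
num_tokens_running = Gauge(
    "vllm:num_tokens_running",
    "Total number of tokens across requests currently running.")
num_tokens_waiting = Gauge(
    "vllm:num_tokens_waiting",
    "Total number of tokens across requests waiting in the queue.")

def record_token_metrics(scheduler) -> None:
    # Hypothetical: `scheduler.running` / `scheduler.waiting` stand in for
    # wherever the engine tracks its request queues.
    num_tokens_running.set(sum(len(r.token_ids) for r in scheduler.running))
    num_tokens_waiting.set(sum(len(r.token_ids) for r in scheduler.waiting))
```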
**Load/cost reporting in response headers in ORCA format**
Open Request Cost Aggregation (ORCA) is a lightweight open protocol for reporting load/cost information to LBs; it is already integrated with Envoy and gRPC.
This feature will be controlled by a new engine argument `--orca_formats` (default `[]`, meaning ORCA is disabled; available values are one or more of `BIN`, `TEXT`, `JSON`). If the feature is enabled, vLLM will report the metrics defined in the doc as HTTP response headers in the OpenAI-compatible APIs (a header-construction sketch follows the task list below).
- Initial ORCA reporting feature integration (add helpers, add engine argument, plumb metrics source to API responses)
- Add the required metrics; this can be broken down per metric
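To make the header reporting concrete, here is a minimal sketch of constructing a TEXT-format load report. It assumes the `endpoint-load-metrics` header name and `named_metrics.`-prefixed custom metrics as used by Envoy's ORCA support; the exact key/value syntax should be checked against the ORCA spec, and the metric names here are illustrative:

```python
def build_orca_text_header(kv_cache_utilization: float,
                           num_requests_waiting: int) -> tuple[str, str]:
    """Render an ORCA load report as an HTTP response header.

    One plausible rendering of the TEXT format (comma-separated key=value
    pairs); consult the ORCA spec for the authoritative syntax.
    """
    metrics = {
        # Custom (non-standard) metrics go under the named_metrics. prefix.
        "named_metrics.kv_cache_utilization": kv_cache_utilization,
        "named_metrics.num_requests_waiting": num_requests_waiting,
    }
    value = "TEXT " + ", ".join(f"{k}={v}" for k, v in metrics.items())
    return "endpoint-load-metrics", value

# Usage in an OpenAI-compatible handler (FastAPI/Starlette style):
#   name, value = build_orca_text_header(0.42, 3)
#   response.headers[name] = value
```

With something like this in place, serving with `TEXT` included in `--orca_formats` would attach the header to every completion response.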
**Out-of-band load/cost reporting API in ORCA format**
vLLM will expose a lightweight API that reports the same metrics in ORCA format. This enables LBs to proactively probe the API and get real-time load information. This is a long-term vision; more details will be shared later.
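Purely to illustrate the probing pattern (the endpoint does not exist yet; the `/load` path and JSON shape below are invented for this sketch):

```python
import requests

def least_loaded(replicas: list[str]) -> str:
    """Hypothetical LB probe: poll each replica's load-report endpoint
    and route to the one with the lowest KV-cache utilization."""
    def utilization(base_url: str) -> float:
        report = requests.get(f"{base_url}/load", timeout=0.1).json()
        return report["named_metrics"]["kv_cache_utilization"]
    return min(replicas, key=utilization)
```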
cc @simon-mo
### Alternatives
No response
### Additional context
No response
### Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.