API Proposal #5
# Glossary

This is a glossary that attempts to more thoroughly explain terms used within the API proposal, in an effort to give context to API decisions.

<!-- toc -->
- [API Terms](#api)
- [BackendPool](#backendpool)
- [UseCase](#usecase)
- [Capacity Constrained Routing](#capacity-constrained-routing)
- [Priority](#priority)
- [Fairness](#fairness)
- [General Routing](#general-routing)
- [Latency Based Routing](#latency-based-routing)
- [LoRA Affinity](#lora-affinity)

<!-- /toc -->

## API
This is a very brief description of terms used to describe API objects, included for completeness.

### BackendPool
A grouping of model servers that serve the same set of fine-tunes (LoRA as a primary example).

Shortened to: `BEP`

### UseCase
An LLM workload that is defined and runs on a BackendPool with other use cases.

# Capacity Constrained Routing

## Priority

### Summary
Priority specifies the importance of a UseCase relative to other UseCases within a BackendPool.

### Description

For our purposes, priority can be thought of in two classes:
- Critical
- Non-Critical

The primary difference is that non-critical UseCase requests will be rejected in favor of Critical UseCases in the face of resource scarcity.

Example:

Your current request load is using 80 Arbitrary Compute Units (ACU) of your pool's total capacity of 100 ACU: 40 ACU are critical workload requests, and 40 ACU are non-critical. If you were to lose 30 ACU of capacity due to an unforeseen outage, priority would dictate that the 10 ACU of surplus requests to be rejected would come entirely from the non-critical requests.
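The priority behavior above can be sketched as a simple admission pass, assuming requests are measured in ACU as in the example; the names (`Request`, `shed_to_capacity`) are illustrative, not part of any proposed API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    acu: int        # cost of the request in Arbitrary Compute Units
    critical: bool  # priority class of the owning UseCase

def shed_to_capacity(requests, capacity):
    """Admit critical requests first; reject whatever no longer fits."""
    admitted, load = [], 0
    # Sorting on `not r.critical` puts critical requests first (stable sort).
    for req in sorted(requests, key=lambda r: not r.critical):
        if load + req.acu <= capacity:
            admitted.append(req)
            load += req.acu
    return admitted

# 40 ACU critical + 40 ACU non-critical against a degraded 70 ACU capacity:
reqs = [Request(10, True)] * 4 + [Request(10, False)] * 4
kept = shed_to_capacity(reqs, capacity=70)
```

With the degraded 70 ACU capacity, all critical requests are admitted and only the non-critical requests absorb the 10 ACU shortfall.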
## Fairness

### Summary
Fairness specifies how resources are shared among different UseCases, in a way that is most acceptable to the user.

### Description

Fairness, like priority, is only used in resource scarcity events.

Fairness is utilized when requests of the same priority class need to be rejected or queued. There are many dimensions that could be considered when sharing resources. To name a few:
- KV-cache utilization
- Total request count
- SLO adherence

For the v1 MVP, the only objective a user can specify is the SLO objective they would like to meet. So, following that pattern, fairness in the MVP will simply be considered in terms of SLO adherence. SLO adherence is only considered over a rolling time window of data.

The TTL we are currently assuming is: `5 min`

### Example

**Assumption:** Services have equally weighted fairness for this example.

- Service A has met its SLO for 98% of the requests made in the time window, and Service B has met its SLO 94% of the time.

- A request for Service A and a request for Service B come in at the same time, and there is only capacity to start a single new request in the BEP; this capacity would meet the SLO for either service. The other request would be queued (potentially causing that request to miss its SLO).

- To fairly share these resources, Service B *must* be selected to begin its request immediately, as Service A has had its SLO met a larger percentage of the time.
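Under the equal-weight assumption, the selection rule reduces to picking the service with the lowest rolling SLO attainment; a minimal sketch (names are illustrative):

```python
def pick_next(adherence):
    """Given {service: fraction of requests meeting SLO over the rolling
    window}, give the free slot to the service furthest behind on its SLO."""
    return min(adherence, key=adherence.get)

# Rolling-window SLO attainment from the example above:
window = {"service-a": 0.98, "service-b": 0.94}
```

Here `pick_next(window)` selects Service B, matching the example: it has met its SLO less often, so it gets the available capacity.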
# General Routing
Different from the previous definitions, these terms are used to describe methods of routing that are constant, and seek to better utilize compute resources to avoid capacity constraints as much as possible.

## Latency Based Routing
### Summary
Latency Based Routing uses data to ensure UseCases meet their specified SLO.

### Description
Data collected from the model servers and from the request is used to predict the time a request will take on a *specific* model server, and to route in a way that will best satisfy the SLO of the incoming requests.
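A hedged sketch of the routing decision described above, where `predict_latency` stands in for whatever prediction model consumes the collected server and request data (the function and server names are illustrative):

```python
def route(servers, predict_latency, slo_seconds):
    """Pick the server with the lowest predicted latency among those that
    can meet the SLO; if none can, fall back to the overall fastest one."""
    candidates = [s for s in servers if predict_latency(s) <= slo_seconds]
    pool = candidates or servers  # degrade gracefully when the SLO is unmeetable
    return min(pool, key=predict_latency)

# Hypothetical predicted completion times (seconds) per model server:
predicted = {"server-0": 1.2, "server-1": 0.4, "server-2": 0.9}
chosen = route(list(predicted), predicted.get, slo_seconds=1.0)
```

With these numbers, the request lands on `server-1`, the server whose predicted completion time best satisfies the 1-second SLO.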
## LoRA Affinity

### Summary
LoRA Affinity describes the routing strategy displayed in the [demo](https://youtu.be/NUBZg_uqqXk?si=v681EeYdGUGEVqQQ&t=1458), to better utilize Model Servers within the BEP.

### Description
Model servers that support multi-LoRA handle requests on a FCFS (first-come, first-served) basis. By utilizing the data provided by the model server (the state of loaded LoRA adapters), a routing system can route requests for a given LoRA adapter to a model server that already has that adapter loaded. This creates larger batches than naive routing would, which better utilizes the model server hardware.
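The affinity rule can be sketched as follows, assuming each server reports its loaded adapters and queue depth (the field names here are illustrative, not the actual model server metrics schema):

```python
def route_lora(servers, adapter):
    """Prefer a server that already has `adapter` loaded; among those,
    pick the shortest queue. Fall back to the least-loaded server if no
    server has the adapter loaded yet."""
    with_adapter = [s for s in servers if adapter in s["loaded"]]
    pool = with_adapter or servers
    return min(pool, key=lambda s: s["queue"])

# Hypothetical fleet state: loaded adapter sets and current queue depths.
fleet = [
    {"name": "ms-1", "loaded": {"geo-lora"}, "queue": 5},
    {"name": "ms-2", "loaded": {"legal-lora"}, "queue": 1},
]
```

A request for `geo-lora` goes to `ms-1` despite its longer queue, because batching it with the already-loaded adapter beats paying the load cost elsewhere; a request for an unloaded adapter simply goes to the least-loaded server.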