This directory contains examples and reference implementations for deploying Large Language Models (LLMs) in various configurations.
- workers: prefill and decode workers that handle the actual LLM inference
- router: handles API requests and routes them to the appropriate workers based on the configured strategy
- frontend: OpenAI-compatible HTTP server that handles incoming requests
Aggregated: single-instance deployment where both prefill and decode are done by the same worker.
Disaggregated: distributed deployment where prefill and decode are done by separate workers that can scale independently.
```mermaid
sequenceDiagram
    participant D as VllmWorker
    participant Q as PrefillQueue
    participant P as PrefillWorker

    Note over D: Request is routed to decode
    D->>D: Decide if prefill should be done locally or remotely
    D->>D: Allocate KV blocks
    D->>Q: Put RemotePrefillRequest on the queue
    P->>Q: Pull request from the queue
    P-->>D: Read cached KVs from Decode
    D->>D: Decode other requests
    P->>P: Run prefill
    P-->>D: Write prefilled KVs into allocated blocks
    P->>D: Send completion notification
    Note over D: Notification received when prefill is done
    D->>D: Schedule decoding
```
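The queue-based handoff above can be sketched in a few lines of Python. This is a minimal illustration of the protocol, not Dynamo's implementation; the length threshold, `allocate_kv_blocks`, and `run_local_prefill` are hypothetical stand-ins.

```python
import queue
from dataclasses import dataclass

@dataclass
class RemotePrefillRequest:
    request_id: str
    prompt_tokens: list[int]
    kv_block_ids: list[int]  # KV blocks pre-allocated by the decode worker

prefill_queue: "queue.Queue[RemotePrefillRequest]" = queue.Queue()

def allocate_kv_blocks(num_tokens: int) -> list[int]:
    """Hypothetical stub: reserve enough KV blocks for num_tokens."""
    return list(range((num_tokens + 15) // 16))  # e.g. 16 tokens per block

def run_local_prefill(prompt_tokens: list[int]) -> None:
    """Hypothetical stub: run prefill on the decode worker itself."""

def handle_request(request_id: str, prompt_tokens: list[int]) -> None:
    # Decide if prefill should be done locally or remotely; a real policy
    # would weigh prompt length, queue depth, and prefix-cache hits.
    if len(prompt_tokens) < 512:  # hypothetical threshold
        run_local_prefill(prompt_tokens)
        return
    # Allocate KV blocks, then hand the request to a prefill worker via
    # the queue; decoding of other requests continues in the meantime.
    prefill_queue.put(
        RemotePrefillRequest(request_id, prompt_tokens,
                             allocate_kv_blocks(len(prompt_tokens)))
    )
    # When the prefill worker has written the KVs and sent the completion
    # notification, the request is scheduled for decoding.
```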
- Choose a deployment architecture based on your requirements
- Configure the components as needed
- Deploy using the provided scripts
Start the required services (etcd and NATS) using Docker Compose:

```bash
docker compose -f deploy/docker-compose.yml up -d
```

Then build and run the container:

```bash
./container/build.sh
./container/run.sh -it
```
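To verify the services came up, you can probe their default client ports (2379 for etcd, 4222 for NATS). A stdlib-only sketch, assuming the compose file publishes those defaults on localhost:

```python
import socket

# Default client ports: etcd on 2379, NATS on 4222 (assumed to be
# published on localhost by deploy/docker-compose.yml).
for name, port in (("etcd", 2379), ("NATS", 4222)):
    try:
        with socket.create_connection(("localhost", port), timeout=2):
            print(f"{name} is reachable on port {port}")
    except OSError as err:
        print(f"{name} is NOT reachable on port {port}: {err}")
```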
This figure shows an overview of the major components to deploy:
```text
                                                 +----------------+
                                          +------| prefill worker |-------+
                                   notify |      |                |       |
                                 finished |      +----------------+       | pull
                                          v                               v
+------+      +-----------+      +------------------+    push     +---------------+
| HTTP |----->| processor |----->| decode/monolith  |------------>| prefill queue |
|      |<-----|           |<-----|      worker      |             |               |
+------+      +-----------+      +------------------+             +---------------+
                  |    ^                   |
       query best |    | return            | publish kv events
           worker |    | worker_id         v
                  |    |           +------------------+
                  |    +-----------|    kv-router     |
                  +--------------->|                  |
                                   +------------------+
```
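The kv-router's "query best worker" step can be illustrated with a toy scoring function: route each request to the worker whose cache already holds the longest prefix of the request's KV blocks. This is a hedged sketch of the general idea; the data structures and names are hypothetical, not Dynamo's actual router.

```python
# Toy KV-aware routing: pick the worker with the best prefix-cache overlap.
def best_worker(request_block_hashes: list[str],
                worker_caches: dict[str, set[str]]) -> str:
    def cached_prefix_len(cache: set[str]) -> int:
        # Count how many leading blocks of the request are already cached.
        n = 0
        for h in request_block_hashes:
            if h not in cache:
                break
            n += 1
        return n
    return max(worker_caches, key=lambda w: cached_prefix_len(worker_caches[w]))

# Example: worker "w2" has the first two blocks cached, so it wins.
caches = {"w1": {"b9"}, "w2": {"b1", "b2"}}
print(best_worker(["b1", "b2", "b3"], caches))  # -> "w2"
```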
Note: For a non-dockerized deployment, first export `DYNAMO_HOME` to point to the Dynamo repository root, e.g. `export DYNAMO_HOME=$(pwd)`.
```bash
# Aggregated serving
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.agg:Frontend -f ./configs/agg.yaml
```

```bash
# Aggregated serving with KV routing
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.agg_router:Frontend -f ./configs/agg_router.yaml
```

```bash
# Disaggregated serving
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.disagg:Frontend -f ./configs/disagg.yaml
```

```bash
# Disaggregated serving with KV routing
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.disagg_router:Frontend -f ./configs/disagg_router.yaml
```
In another terminal:
```bash
# This test request has an input sequence length (ISL) of roughly 200 tokens
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
      {
        "role": "user",
        "content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at ests that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost familt clue is hidden."
      }
    ],
    "stream": false,
    "max_tokens": 30
  }'
```
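Because the frontend is OpenAI-compatible, you can send the same request with the official `openai` Python client. A minimal sketch; the `api_key` value is a placeholder, since the local frontend does not authenticate requests:

```python
from openai import OpenAI

# Point the client at the local Dynamo frontend; the API key is a
# placeholder because the local endpoint does not check it.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    messages=[{"role": "user", "content": "Tell me a short story about Aeloria."}],
    max_tokens=30,
    stream=False,
)
print(response.choices[0].message.content)
```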
See multinode-examples.md for more details.
See the close deployment section to learn how to shut down the deployment.
These examples can be deployed to a Kubernetes cluster using Dynamo Cloud and the Dynamo deploy CLI.
Before deploying, ensure you have:
- Dynamo CLI installed
- Ubuntu 24.04 as the base image
- Required dependencies:
- Helm package manager
- Dynamo SDK and CLI tools
- Rust packages and toolchain
You must have first followed the instructions in deploy/dynamo/helm/README.md to install Dynamo Cloud on your Kubernetes cluster.
Note: The KUBE_NS variable in the following steps must match the Kubernetes namespace where you installed Dynamo Cloud. You must also expose the dynamo-store service externally; this is the endpoint the CLI uses to interface with Dynamo Cloud.
- Log in to Dynamo Cloud

```bash
export PROJECT_ROOT=$(pwd)
export KUBE_NS=dynamo-cloud  # must match the Kubernetes namespace where you installed Dynamo Cloud
export DYNAMO_CLOUD=https://${KUBE_NS}.dev.aire.nvidia.com  # externally accessible endpoint to the dynamo-store service
dynamo cloud login --api-token TEST-TOKEN --endpoint $DYNAMO_CLOUD
```
- Build the Dynamo base image

Note: For instructions on building and pushing the Dynamo base image, see the Building the Dynamo Base Image section in the main README.

```bash
# Set the runtime image name
export DYNAMO_IMAGE=<dynamo_docker_image_name>
```
```bash
# Prepare your project for deployment: build the service graph and capture
# the resulting tag (awk takes the last field of the "Successfully built"
# line; sed strips the trailing period)
cd $PROJECT_ROOT/examples/llm
DYNAMO_TAG=$(dynamo build graphs.agg:Frontend | grep "Successfully built" | awk '{ print $NF }' | sed 's/\.$//')
```
- Deploy to Kubernetes

```bash
echo $DYNAMO_TAG
export DEPLOYMENT_NAME=llm-agg
dynamo deployment create $DYNAMO_TAG --no-wait -n $DEPLOYMENT_NAME -f ./configs/agg.yaml
```
- Test the deployment

After you create the Dynamo deployment, a pod prefixed with `yatai-dynamonim-image-builder` starts and builds the image for your deployment. When it finishes, pods are created from that image. Once the pods prefixed with `$DEPLOYMENT_NAME` are up and running, you can test out your example!
Find your frontend pod using one of these methods:
```bash
# Method 1: List all pods and find the frontend pod manually
kubectl get pods -n ${KUBE_NS} | grep frontend | cat

# Method 2: Filter on the pod name to find the frontend pod automatically
export FRONTEND_POD=$(kubectl get pods -n ${KUBE_NS} | grep "${DEPLOYMENT_NAME}-frontend" | sort -k1 | tail -n1 | awk '{print $1}')

# Forward the pod's port to localhost.
# We forward directly to the pod's port 8000 rather than the service port
# because the frontend component listens on port 8000 internally.
kubectl port-forward pod/$FRONTEND_POD 8000:8000 -n ${KUBE_NS}
```
```bash
# Test the API endpoint
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
      {
        "role": "user",
        "content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at ests that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost familt clue is hidden."
      }
    ],
    "stream": false,
    "max_tokens": 30
  }'
```
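If the model is still loading, the first request may fail. A small sketch that polls the port-forwarded endpoint until it responds; it assumes the port-forward from the previous step is active and that the frontend serves the standard OpenAI-compatible `/v1/models` route (an assumption, not confirmed above):

```python
import time
import urllib.request

# Poll the port-forwarded frontend until it accepts requests.
# Assumes `kubectl port-forward ... 8000:8000` is running and that the
# frontend exposes the standard OpenAI-compatible /v1/models route.
URL = "http://localhost:8000/v1/models"

for attempt in range(30):
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            print(f"Frontend ready (HTTP {resp.status})")
            break
    except OSError:
        time.sleep(10)  # not ready yet; retry
else:
    raise SystemExit("Frontend did not become ready in time")
```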