
Commit 1e2d723

Merge branch 'main' into aks_demo
2 parents 0a20ef2 + 3c500ae commit 1e2d723

File tree: 26 files changed (+404, -2560 lines)

README.md

Lines changed: 152 additions & 80 deletions
@@ -55,96 +55,68 @@ Built in Rust for performance and in Python for extensibility, Dynamo is fully o
 The following examples require a few system level packages.
 Recommended to use Ubuntu 24.04 with a x86_64 CPU. See [docs/support_matrix.md](docs/support_matrix.md)
 
-```
-apt-get update
-DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0
-python3 -m venv venv
-source venv/bin/activate
-
-pip install "ai-dynamo[all]"
-```
-> [!NOTE]
-> To ensure compatibility, please refer to the examples in the release branch or tag that matches the version you installed.
-
-### Building the Dynamo Base Image
+1. Install etcd and nats
 
-Although not needed for local development, deploying your Dynamo pipelines to Kubernetes will require you to push a Dynamo base image to your container registry. You can use any container registry of your choice, such as:
-- Docker Hub (docker.io)
-- NVIDIA NGC Container Registry (nvcr.io)
-- Any private registry
+To coordinate across the data center, Dynamo relies on an etcd and nats cluster. To run locally, these need to be available.
 
-We publish our images in [nvcr.io](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/vllm-runtime) and you can use them.
-Alternatively you could build and push an image from source:
+- [etcd](https://etcd.io/) can be run directly as `./etcd`.
+- [nats](https://nats.io/) needs jetstream enabled: `nats-server -js`.
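For local development it is usually enough to start both services in the background before launching any Dynamo components. A minimal sketch, assuming the `etcd` binary and `nats-server` are already on your `PATH` (how you installed them may differ):

```
# etcd and nats must stay running; Dynamo workers and the frontend
# register and discover each other through them
nats-server -js > nats.log 2>&1 &
etcd > etcd.log 2>&1 &
```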
 
-```bash
-./container/build.sh
-docker tag dynamo:latest-vllm <your-registry>/dynamo-base:latest-vllm
-docker login <your-registry>
-docker push <your-registry>/dynamo-base:latest-vllm
+The Dynamo team recommends the `uv` Python package manager, although any package manager works. Install uv:
+```
+curl -LsSf https://astral.sh/uv/install.sh | sh
 ```
 
-Notes about builds for specific frameworks:
-- For specific details on the `--framework vllm` build, [read about the vLLM backend](components/backends/vllm/README.md).
-- For specific details on the `--framework tensorrtllm` build, [read about the TensorRT-LLM backend](components/backends/trtllm/README.md).
+2. Select an engine
 
-Note about AWS environments:
-- If deploying Dynamo in AWS, make sure to build the container with EFA support using the `--make-efa` flag.
+We publish Python wheels specialized for each of our supported engines: vllm, sglang, llama.cpp and trtllm. The examples that follow use sglang; read on for other engines.
 
-After building, you can use this image by setting the `DYNAMO_IMAGE` environment variable to point to your built image:
-```bash
-export DYNAMO_IMAGE=<your-registry>/dynamo-base:latest-vllm
 ```
+uv venv venv
+source venv/bin/activate
+uv pip install pip
 
-> [!NOTE]
-> We are working on leaner base images that can be built using the targets in the top-level Earthfile.
+# Choose one
+uv pip install "ai-dynamo[sglang]"
+uv pip install "ai-dynamo[vllm]"
+uv pip install "ai-dynamo[llama_cpp]" # CPU, see later for GPU
+```
 
 ### Running and Interacting with an LLM Locally
 
 You can run a model and interact with it locally using commands below.
-We support several backends including: `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`.
 
 #### Example Commands
 
 ```
-python -m dynamo.frontend [--http-port 8080]
-python -m dynamo.vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
+python -m dynamo.frontend --interactive
+python -m dynamo.sglang.worker Qwen/Qwen3-4B
 ```
 
 ```
-? User › Hello, how are you?
 ✔ User · Hello, how are you?
 Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
 ```
 
-### LLM Serving
+If the model is not available locally it will be downloaded from HuggingFace and cached.
 
-Dynamo provides a simple way to spin up a local set of inference
-components including:
+You can also pass a local path: `python -m dynamo.sglang.worker --model-path ~/llms/Qwen3-0.6B`
+
+### Running an LLM API server
+
+Dynamo provides a simple way to spin up a local set of inference components including:
 
 - **OpenAI Compatible Frontend** – High performance OpenAI compatible http api server written in Rust.
 - **Basic and Kv Aware Router** – Route and load balance traffic to a set of workers.
 - **Workers** – Set of pre-configured LLM serving engines.
 
-To run a minimal configuration you can use a pre-configured
-example.
-
-#### Start Dynamo Distributed Runtime Services
-
-First start the Dynamo Distributed Runtime services:
-
-```bash
-docker compose -f deploy/metrics/docker-compose.yml up -d
 ```
-#### Start Dynamo LLM Serving Components
-
-Next serve a minimal configuration with an http server, basic
-round-robin router, and a single worker.
+# Start an OpenAI compatible HTTP server, a pre-processor (prompt templating and tokenization) and a router:
+python -m dynamo.frontend [--http-port 8080]
 
-```bash
-cd examples/llm
-dynamo serve graphs.agg:Frontend -f configs/agg.yaml
+# Start the sglang engine, connecting to nats and etcd to receive requests. You can run several of these,
+# both for the same model and for multiple models. The frontend node will discover them.
+python -m dynamo.sglang.worker deepseek-ai/DeepSeek-R1-Distill-Llama-8B
 ```
 
 #### Send a Request
@@ -163,43 +135,143 @@ curl localhost:8000/v1/chat/completions -H "Content-Type: application/json"
 }' | jq
 ```
 
+Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them.
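For example, a streaming version of the request above might look like the following (a sketch; the model name must match whichever worker you started):

```
curl -N localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-4B",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": true
  }'
```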
+
+### Engines
+
+In the introduction we installed the `sglang` engine. There are other options.
+
+All of these require nats and etcd, as well as a frontend (`python -m dynamo.frontend [--interactive]`).
+
+# vllm
+
+```
+uv pip install ai-dynamo[vllm]
+```
+
+Run the backend/worker like this:
+```
+python -m dynamo.vllm --help
+```
+
+vllm attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory, pass `--context-length <value>`.
+
+To specify which GPUs to use, set the environment variable `CUDA_VISIBLE_DEVICES`.
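For instance, a sketch of pinning a worker to two specific GPUs and capping the KV cache (the GPU indices and context length here are illustrative; check `python -m dynamo.vllm --help` for the exact model and engine flags):

```
# run this worker on GPUs 2 and 3 only, with a 16k context budget
CUDA_VISIBLE_DEVICES=2,3 python -m dynamo.vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B --context-length 16384
```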
+
+# sglang
+
+```
+uv pip install ai-dynamo[sglang]
+```
+
+Run the backend/worker like this:
+```
+python -m dynamo.sglang.worker --help
+```
+
+You can pass any sglang flags directly to this worker; see https://docs.sglang.ai/backend/server_arguments.html, including for how to use multiple GPUs.
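For example, to shard a model across two GPUs you can forward sglang's tensor-parallel flag (a sketch; `--tp` is the same flag the bundled launch script uses):

```
python -m dynamo.sglang.worker --model-path deepseek-ai/DeepSeek-R1-Distill-Llama-8B --tp 2
```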
+
+# TRT-LLM
+
+This currently requires a container; documentation for the TRT-LLM backend is still to be added.
+
+# llama.cpp
+
+To install llama.cpp for CPU inference:
+```
+uv pip install ai-dynamo[llama_cpp]
+```
+
+To build llama.cpp for CUDA:
+```
+pip install llama-cpp-python -C cmake.args="-DGGML_CUDA=on"
+uv pip install uvloop ai-dynamo
+```
+
+At the time of writing the `uv pip` version does not support that syntax, so use `pip` directly inside the venv.
+
+To build llama.cpp for other accelerators see https://pypi.org/project/llama-cpp-python/.
+
+Download a GGUF and run the engine like this:
+```
+python -m dynamo.llama_cpp --model-path ~/llms/Qwen3-0.6B-Q8_0.gguf
+```
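If you do not have a GGUF locally, one way to fetch one is the Hugging Face CLI (a sketch; the repository and file names below are only an example, and any GGUF file works):

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download Qwen/Qwen3-0.6B-GGUF Qwen3-0.6B-Q8_0.gguf --local-dir ~/llms
```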
+
+If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to enable it.
+
 ### Local Development
 
-If you use vscode or cursor, we have a .devcontainer folder built on [Microsoft's Dev Containers extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [README](.devcontainer/README.md) for instructions.
+1. Install libraries
+
+**Ubuntu:**
+```
+sudo apt install -y build-essential libhwloc-dev libudev-dev pkg-config libclang-dev protobuf-compiler python3-dev cmake
+```
 
-Otherwise, to develop locally, we recommend working inside of the container
+**macOS:**
+- [Homebrew](https://brew.sh/)
+```
+# if brew is not installed on your system, install it
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+```
+- [Xcode](https://developer.apple.com/xcode/)
 
-```bash
-./container/build.sh
-./container/run.sh -it --mount-workspace
+```
+brew install cmake protobuf
 
-cargo build --release
-mkdir -p /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
-cp /workspace/target/release/dynamo-run /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
+## Check that Metal is accessible
+xcrun -sdk macosx metal
+```
+If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.
+
+2. Install Rust
 
-uv pip install -e .
-export PYTHONPATH=$PYTHONPATH:/workspace/deploy/sdk/src:/workspace/components/planner/src
+```
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+source $HOME/.cargo/env
 ```
 
+3. Create a Python virtual env:
 
-#### Conda Environment
+```
+uv venv dynamo
+source dynamo/bin/activate
+```
 
-Alternately, you can use a conda environment
+4. Install build tools
 
-```bash
-conda activate <ENV_NAME>
+```
+uv pip install pip maturin
+```
 
-pip install nixl # Or install https://github.com/ai-dynamo/nixl from source
+[Maturin](https://github.com/PyO3/maturin) is the Rust<->Python bindings build tool.
 
-cargo build --release
+5. Build the Rust bindings
 
-# To install ai-dynamo-runtime from source
+```
 cd lib/bindings/python
-pip install .
+maturin develop --uv
+```
+
+6. Install the wheel
+
+```
+cd $PROJECT_ROOT
+uv pip install .
+```
+
+Note that editable installs (`-e`) do not work because the `dynamo` package is split over multiple directories, one per backend.
+
+You should now be able to run `python -m dynamo.frontend`.
+
+Remember that nats and etcd must be running (see earlier).
+
+Set the environment variable `DYN_LOG` to adjust the logging level; for example, `export DYN_LOG=debug`. It has the same syntax as `RUST_LOG`.
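Because the syntax matches `RUST_LOG`, per-module filters also work; for example (the module name here is illustrative):

```
# info logging everywhere, but trace logging from one module
export DYN_LOG=info,dynamo_runtime=trace
```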
+
+If you use vscode or cursor, we have a .devcontainer folder built on [Microsoft's Dev Containers extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [README](.devcontainer/README.md) for instructions.
 
-cd ../../../
-pip install ".[all]"
+### Deployment to Kubernetes
 
-Follow the [Quickstart Guide](docs/guides/dynamo_deploy/quickstart.md)
+Follow the [Quickstart Guide](docs/guides/dynamo_deploy/quickstart.md) to deploy to Kubernetes.
 
-```
Lines changed: 96 additions & 0 deletions (new file)

@@ -0,0 +1,96 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
  name: sglang-agg
spec:
  services:
    Frontend:
      livenessProbe:
        httpGet:
          path: /health
          port: 8000
        initialDelaySeconds: 60
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 10
      readinessProbe:
        exec:
          command:
            - /bin/sh
            - -c
            - "exit 0"
        initialDelaySeconds: 60
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 10
      dynamoNamespace: sglang-agg
      componentType: main
      replicas: 1
      resources:
        requests:
          cpu: "5"
          memory: "10Gi"
        limits:
          cpu: "5"
          memory: "10Gi"
      extraPodSpec:
        mainContainer:
          image: my-registry/sglang-runtime:my-tag
          workingDir: /workspace/components/backends/sglang
          command: ["sh", "-c"]
          args:
            - "python3 -m dynamo.sglang.utils.clear_namespace --namespace dynamo && python3 -m dynamo.frontend"
    SGLangDecodeWorker:
      envFromSecret: hf-token-secret
      livenessProbe:
        exec:
          command:
            - /bin/sh
            - -c
            - "exit 0"
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 10
      readinessProbe:
        exec:
          command:
            - /bin/sh
            - -c
            - "exit 0"
        initialDelaySeconds: 60
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 10
      dynamoNamespace: sglang-agg
      componentType: worker
      replicas: 1
      resources:
        requests:
          cpu: "10"
          memory: "20Gi"
          gpu: "1"
        limits:
          cpu: "10"
          memory: "20Gi"
          gpu: "1"
      extraPodSpec:
        mainContainer:
          image: my-registry/sglang-runtime:my-tag
          workingDir: /workspace/components/backends/sglang
          args:
            - "python3"
            - "-m"
            - "dynamo.sglang.worker"
            - "--model-path"
            - "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
            - "--served-model-name"
            - "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
            - "--page-size"
            - "16"
            - "--tp"
            - "1"
            - "--trust-remote-code"
            - "--skip-tokenizer-init"

components/backends/sglang/launch/agg.sh

Lines changed: 1 addition & 1 deletion
@@ -25,4 +25,4 @@ python3 -m dynamo.sglang.worker \
   --page-size 16 \
   --tp 1 \
   --trust-remote-code \
-  --skip-tokenizer-init \
+  --skip-tokenizer-init
