> To ensure compatibility, please refer to the examples in the release branch or tag that matches the version you installed.
### Building the Dynamo Base Image

Although it is not needed for local development, deploying your Dynamo pipelines to Kubernetes will require you to push a Dynamo base image to your container registry. You can use any container registry of your choice, such as:

- Docker Hub (docker.io)
- NVIDIA NGC Container Registry (nvcr.io)
- Any private registry

We publish our images on [nvcr.io](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-dynamo/containers/vllm-runtime), and you can use them directly. Alternatively, you can build and push an image from source.

Notes about builds for specific frameworks:

- For specific details on the `--framework vllm` build, [read about the vLLM backend](components/backends/vllm/README.md).
- For specific details on the `--framework tensorrtllm` build, [read about the TensorRT-LLM backend](components/backends/trtllm/README.md).

Note about AWS environments:

- If deploying Dynamo in AWS, make sure to build the container with EFA support using the `--make-efa` flag.

After building, you can use this image by setting the `DYNAMO_IMAGE` environment variable to point to your built image.

1. Install etcd and nats

To coordinate across the data center, Dynamo relies on an etcd and nats cluster. To run locally, both need to be available:

- [etcd](https://etcd.io/) can be run directly as `./etcd`.
- [nats](https://nats.io/) needs JetStream enabled, e.g. `nats-server -js`.
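To start both services quickly, you can also use the Docker Compose file shipped in this repository:

```
docker compose -f deploy/metrics/docker-compose.yml up -d
```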
The Dynamo team recommends the `uv` Python package manager, although any will work. Install uv:

```
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. Select an engine

We publish Python wheels specialized for each of our supported engines: vllm, sglang, llama.cpp, and trtllm. The examples that follow use sglang; read on for the other engines.
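For example, to install the sglang wheel and start a worker (the install command matches the sglang section below; the model id is illustrative, and any HuggingFace model id should work):

```
uv pip install ai-dynamo[sglang]
python -m dynamo.sglang.worker --model-path Qwen/Qwen3-0.6B
```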
If the model is not available locally, it will be downloaded from HuggingFace and cached. You can also pass a local path: `python -m dynamo.sglang.worker --model-path ~/llms/Qwen3-0.6B`
### Running an LLM API server

Dynamo provides a simple way to spin up a local set of inference components including:
- **OpenAI Compatible Frontend** – High-performance OpenAI-compatible HTTP API server written in Rust.
- **Basic and KV-Aware Router** – Routes and load-balances traffic to a set of workers.
- **Workers** – A set of pre-configured LLM serving engines.
To run a minimal configuration (an HTTP server, a basic round-robin router, and a single worker) you can use a pre-configured example.

First start the Dynamo distributed runtime services, etcd and nats, as described above. Then:

```bash
# Start an OpenAI compatible HTTP server, a pre-processor (prompt templating and tokenization) and a router:
python -m dynamo.frontend [--http-port 8080]

# Start the vllm engine, connecting to nats and etcd to receive requests. You can run several of these,
# both for the same model and for multiple models. The frontend node will discover them.
# The model flag below is illustrative; run `python -m dynamo.vllm --help` for the exact arguments.
python -m dynamo.vllm --model-path Qwen/Qwen3-0.6B
```
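You can then send a request to the frontend. A minimal sketch, assuming the default port 8080 and that the model name matches the one being served:

```bash
curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen3-0.6B", "messages": [{"role": "user", "content": "Hello!"}], "stream": false}'
```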
Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them.
### Engines
In the introduction we installed the `sglang` engine. There are other options.
All of these require nats and etcd, as well as a frontend (`python -m dynamo.frontend [--interactive]`).

#### vllm

```
uv pip install ai-dynamo[vllm]
```
Run the backend/worker like this:

```
python -m dynamo.vllm --help
```

vllm attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory, pass `--context-length <value>`.

To specify which GPUs to use, set the environment variable `CUDA_VISIBLE_DEVICES`.
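For example, to pin the worker to two GPUs and cap the KV cache (the `--model-path` flag here is an assumption mirroring the sglang worker; check `python -m dynamo.vllm --help` for the exact arguments):

```
CUDA_VISIBLE_DEVICES=0,1 python -m dynamo.vllm --model-path Qwen/Qwen3-0.6B --context-length 8192
```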
#### sglang

```
uv pip install ai-dynamo[sglang]
```
Run the backend/worker like this:

```
python -m dynamo.sglang.worker --help
```

You can pass any sglang flags directly to this worker; see https://docs.sglang.ai/backend/server_arguments.html, including for how to use multiple GPUs.
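For instance, a two-GPU tensor-parallel run might look like the following (`--tp-size` is taken from sglang's server arguments; verify against the page linked above, and the model id is illustrative):

```
python -m dynamo.sglang.worker --model-path Qwen/Qwen3-0.6B --tp-size 2
```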
#### TRT-LLM

This engine currently requires a container. (Documentation TODO.)
#### llama.cpp

If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to dynamo-run to enable it.
### Local Development
1. Install Rust:

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```
2. Create a Python virtual env:
```
uv venv dynamo
source dynamo/bin/activate
```

Alternately, you can use a conda environment and build with cargo and pip directly, in place of steps 3–5 below:

```bash
conda activate <ENV_NAME>
pip install nixl # Or install https://github.com/ai-dynamo/nixl from source
cargo build --release

# To install ai-dynamo-runtime from source
cd lib/bindings/python
pip install .

cd ../../../
pip install ".[all]"
```

3. Install build tools:
244
+
```
245
+
uv pip install pip maturin
246
+
```
191
247
192
-
pip install nixl # Or install https://github.com/ai-dynamo/nixl from source
248
+
[Maturin](https://github.com/PyO3/maturin) is the Rust<->Python bindings build tool.
4. Build the Rust bindings:

```
cd lib/bindings/python
maturin develop --uv
```
5. Install the wheel:
```
cd $PROJECT_ROOT
uv pip install .
```
Note that an editable install (`-e`) does not work because the `dynamo` package is split over multiple directories, one per backend.
You should now be able to run `python -m dynamo.frontend`.
Remember that nats and etcd must be running (see earlier).
Set the environment variable `DYN_LOG` to adjust the logging level; for example, `export DYN_LOG=debug`. It has the same syntax as `RUST_LOG`.
If you use VS Code or Cursor, we have a `.devcontainer` folder built on [Microsoft's extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [README](.devcontainer/README.md) for instructions.
### Deployment to Kubernetes
Follow the [Quickstart Guide](docs/guides/dynamo_deploy/quickstart.md) to deploy to Kubernetes.