
Commit 3175b10

docs: Update to README.md (#2141)
Signed-off-by: Anish <80174047+athreesh@users.noreply.github.com>
1 parent 4747790 commit 3175b10

2 files changed: +148 -91 lines changed

README.md

Lines changed: 68 additions & 91 deletions
@@ -21,87 +21,89 @@ limitations under the License.
 [![Discord](https://dcbadge.limes.pink/api/server/D92uqZRjCZ?style=flat)](https://discord.gg/D92uqZRjCZ)
 [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ai-dynamo/dynamo)

-| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Documentation](https://docs.nvidia.com/dynamo/latest/index.html)** | **[Examples](https://github.com/ai-dynamo/examples)** | **[Design Proposals](https://github.com/ai-dynamo/enhancements)** |
+| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Documentation](https://docs.nvidia.com/dynamo/latest/index.html)** | **[Examples](https://github.com/ai-dynamo/dynamo/tree/main/examples)** | **[Design Proposals](https://github.com/ai-dynamo/enhancements)** |

-### The Era of Multi-Node, Multi-GPU
+# NVIDIA Dynamo

-![GPU Evolution](./docs/images/frontpage-gpu-evolution.png)
+High-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.

+## The Era of Multi-GPU, Multi-Node

-Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.
-
-![Multi Node Multi-GPU topology](./docs/images/frontpage-gpu-vertical.png)
-
+<p align="center">
+  <img src="./docs/images/frontpage-gpu-vertical.png" alt="Multi Node Multi-GPU topology" width="600" />
+</p>

+Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.

-### Introducing NVIDIA Dynamo
-
-NVIDIA Dynamo is a high-throughput low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLang or others) and captures LLM-specific capabilities such as:
-
-![Dynamo architecture](./docs/images/frontpage-architecture.png)
+Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLang or others) and captures LLM-specific capabilities such as:

 - **Disaggregated prefill & decode inference** – Maximizes GPU throughput and facilitates trade-offs between throughput and latency.
 - **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand
 - **LLM-aware request routing** – Eliminates unnecessary KV cache re-computation
 - **Accelerated data transfer** – Reduces inference response time using NIXL.
 - **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput

-Built in Rust for performance and in Python for extensibility, Dynamo is fully open-source and driven by a transparent, OSS (Open Source Software) first development approach.
+<p align="center">
+  <img src="./docs/images/frontpage-architecture.png" alt="Dynamo architecture" width="600" />
+</p>
+
+## Framework Support Matrix

+| Feature | vLLM | SGLang | TensorRT-LLM |
+|---------|------|--------|--------------|
+| [**Disaggregated Serving**](/docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ |
+| [**Conditional Disaggregation**](/docs/architecture/disagg_serving.md#conditional-disaggregation) | 🚧 | 🚧 | 🚧 |
+| [**KV-Aware Routing**](/docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ |
+| [**SLA-Based Planner**](/docs/architecture/sla_planner.md) | ✅ | 🚧 | 🚧 |
+| [**Load Based Planner**](/docs/architecture/load_planner.md) | ✅ | 🚧 | 🚧 |
+| [**KVBM**](/docs/architecture/kvbm_architecture.md) | 🚧 | 🚧 | 🚧 |

+To learn more about each framework and its capabilities, check out each framework's README!
+- **[vLLM](components/backends/vllm/README.md)**
+- **[SGLang](components/backends/sglang/README.md)**
+- **[TensorRT-LLM](components/backends/trtllm/README.md)**

-### Installation
+Built in Rust for performance and in Python for extensibility, Dynamo is fully open-source and driven by a transparent, OSS (Open Source Software) first development approach.
+
+# Installation

 The following examples require a few system level packages.
 Recommended to use Ubuntu 24.04 with an x86_64 CPU. See [docs/support_matrix.md](docs/support_matrix.md)

-1. Install etcd and nats
+## 1. Initial setup
+
+The Dynamo team recommends the `uv` Python package manager, although any package manager will work. Install uv:
+```
+curl -LsSf https://astral.sh/uv/install.sh | sh
+```

-To co-ordinate across the data center Dynamo relies on an etcd and nats cluster. To run locally these need to be available.
+### Install etcd and NATS (required)
+
+To coordinate across a data center, Dynamo relies on etcd and NATS. To run Dynamo locally, these need to be available.

 - [etcd](https://etcd.io/) can be run directly as `./etcd`.
 - [nats](https://nats.io/) needs jetstream enabled: `nats-server -js`.

-The Dynamo team recommend the `uv` Python package manager, although anyway works. Install uv:
+To quickly set up etcd & NATS, you can also run:
 ```
-curl -LsSf https://astral.sh/uv/install.sh | sh
+# At the root of the repository:
+docker compose -f deploy/docker-compose.yml up -d
 ```

-2. Select an engine
+## 2. Select an engine

-We publish Python wheels specialized for each of our supported engines: vllm, sglang, llama.cpp and trtllm. The examples that follow use sglang, read on for other engines.
+We publish Python wheels specialized for each of our supported engines: vllm, sglang, trtllm, and llama.cpp. The examples that follow use SGLang; continue reading for other engines.

 ```
 uv venv venv
 source venv/bin/activate
 uv pip install pip

 # Choose one
-uv pip install "ai-dynamo[sglang]"
-uv pip install "ai-dynamo[vllm]"
-uv pip install "ai-dynamo[trtllm]"
-uv pip install "ai-dynamo[llama_cpp]" # CPU, see later for GPU
+uv pip install "ai-dynamo[sglang]"  # replace with [vllm], [trtllm], etc.
 ```

-### Running and Interacting with an LLM Locally
-
-You can run a model and interact with it locally using commands below.
-
-#### Example Commands
-
-```
-python -m dynamo.frontend --interactive
-python -m dynamo.sglang.worker Qwen/Qwen3-4B
-```
-
-```
-✔ User · Hello, how are you?
-Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
-```
-
-If the model is not available locally it will be downloaded from HuggingFace and cached.
-
-You can also pass a local path: `python -m dynamo.sglang.worker --model-path ~/llms/Qwen3-0.6B`
+## 3. Run Dynamo

 ### Running an LLM API server

@@ -115,7 +117,7 @@ Dynamo provides a simple way to spin up a local set of inference components incl
 # Start an OpenAI compatible HTTP server, a pre-processor (prompt templating and tokenization) and a router:
 python -m dynamo.frontend [--http-port 8080]

-# Start the vllm engine, connecting to nats and etcd to receive requests. You can run several of these,
+# Start the SGLang engine, connecting to NATS and etcd to receive requests. You can run several of these,
 # both for the same model and for multiple models. The frontend node will discover them.
 python -m dynamo.sglang.worker deepseek-ai/DeepSeek-R1-Distill-Llama-8B
 ```
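
For a quick sanity check of the commands above, the chat completions request that the next hunk refers to can be exercised roughly like this (a sketch based on the surrounding text; the `max_tokens` value is an assumption):

```
curl -N localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
        "stream": true,
        "max_tokens": 64
      }'
```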
@@ -138,13 +140,17 @@ curl localhost:8080/v1/chat/completions -H "Content-Type: application/json"

 Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them.

-### Engines
+### Deploying Dynamo
+
+- Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy on Kubernetes.
+- Check out [Backends](components/backends) to deploy various workflow configurations (e.g. SGLang with router, vLLM with disaggregated serving, etc.)
+- Run some [Examples](examples) to learn about building components in Dynamo and exploring various integrations.

-In the introduction we installed the `sglang` engine. There are other options.
+# Engines

-All of these requires nats and etcd, as well as a frontend (`python -m dynamo.frontend [--interactive]`).
+Dynamo is designed to be inference engine agnostic. To use any engine with Dynamo, NATS and etcd need to be installed, along with a Dynamo frontend (`python -m dynamo.frontend [--interactive]`).
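
Putting the pieces named in this hunk together, the engine-agnostic wiring looks roughly like the sketch below (backgrounding with `&` and the model choice are assumptions; skip the infrastructure step if the docker compose services are already running):

```
# Infrastructure: etcd plus NATS with JetStream (or use deploy/docker-compose.yml)
./etcd &
nats-server -js &

# Frontend: OpenAI compatible HTTP server, pre-processor, and router
python -m dynamo.frontend --http-port 8080 &

# One engine worker; swap the module for dynamo.vllm or dynamo.trtllm as needed
python -m dynamo.sglang.worker Qwen/Qwen3-4B
```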

-# vllm
+## vLLM

 ```
 uv pip install ai-dynamo[vllm]
@@ -155,26 +161,26 @@ Run the backend/worker like this:
 python -m dynamo.vllm --help
 ```

-vllm attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory pass `--context-length <value>`.
+vLLM attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory pass `--context-length <value>`.

 To specify which GPUs to use set environment variable `CUDA_VISIBLE_DEVICES`.

-# sglang
+## SGLang

 ```
 uv pip install ai-dynamo[sglang]
 ```

 Run the backend/worker like this:
 ```
-python -m dynamo.sglang.worker --help
+python -m dynamo.sglang.worker --help  # Note the '.worker' in the module path for SGLang
 ```

 You can pass any SGLang flags directly to this worker; see https://docs.sglang.ai/backend/server_arguments.html, which also covers using multiple GPUs.
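
As a hedged example of passing flags through (the model and the `--tp-size` value are illustrative; the SGLang server arguments page linked above is authoritative):

```
# Pin two GPUs and shard the model across them with SGLang's tensor-parallel flag
CUDA_VISIBLE_DEVICES=0,1 python -m dynamo.sglang.worker Qwen/Qwen3-4B --tp-size 2
```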

-# TRT-LLM
+## TensorRT-LLM

-It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for running TensorRT-LLM engine.
+It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for running the TensorRT-LLM engine.

 > [!Note]
 > Ensure that you select a PyTorch container image version that matches the version of TensorRT-LLM you are using.
@@ -184,7 +190,7 @@ It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/
 > [!Important]
 > Launch container with the following additional settings `--shm-size=1g --ulimit memlock=-1`

-## Install prerequites
+### Install prerequisites
 ```
 # Optional step: Only required for Blackwell and Grace Hopper
 pip3 install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
@@ -195,7 +201,7 @@ sudo apt-get -y install libopenmpi-dev
 > [!Tip]
 > You can learn more about these prerequisites and known issues with TensorRT-LLM pip-based installation [here](https://nvidia.github.io/TensorRT-LLM/installation/linux.html).

-## Install dynamo
+### After installing the prerequisites above, install Dynamo
 ```
 uv pip install --upgrade pip setuptools && uv pip install ai-dynamo[trtllm]
 ```
@@ -207,33 +213,9 @@ python -m dynamo.trtllm --help

 To specify which GPUs to use set environment variable `CUDA_VISIBLE_DEVICES`.

-# llama.cpp
-
-To install llama.cpp for CPU inference:
-```
-uv pip install ai-dynamo[llama_cpp]
-```
-
-To build llama.cpp for CUDA:
-```
-pip install llama-cpp-python -C cmake.args="-DGGML_CUDA=on"
-uv pip install uvloop ai-dynamo
-```
+# Developing Locally

-At time of writing the `uv pip` version does not support that syntax, so use `pip` directly inside the venv.
-
-To build llama.cpp for other accelerators see https://pypi.org/project/llama-cpp-python/ .
-
-Download a GGUF and run the engine like this:
-```
-python -m dynamo.llama_cpp --model-path ~/llms/Qwen3-0.6B-Q8_0.gguf
-```
-
-If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to dynamo-run to enable it.
-
-### Local Development
-
-1. Install libraries
+## 1. Install libraries

 **Ubuntu:**
 ```
@@ -257,36 +239,36 @@ xcrun -sdk macosx metal
 If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.

-2. Install Rust
+## 2. Install Rust

 ```
 curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
 source $HOME/.cargo/env
 ```

-3. Create a Python virtual env:
+## 3. Create a Python virtual env:

 ```
 uv venv dynamo
 source dynamo/bin/activate
 ```

-4. Install build tools
+## 4. Install build tools

 ```
 uv pip install pip maturin
 ```

 [Maturin](https://github.com/PyO3/maturin) is the Rust<->Python bindings build tool.

-5. Build the Rust bindings
+## 5. Build the Rust bindings

 ```
 cd lib/bindings/python
 maturin develop --uv
 ```

-6. Install the wheel
+## 6. Install the wheel

 ```
 cd $PROJECT_ROOT
@@ -302,8 +284,3 @@ Remember that nats and etcd must be running (see earlier).
 Set the environment variable `DYN_LOG` to adjust the logging level; for example, `export DYN_LOG=debug`. It has the same syntax as `RUST_LOG`.

 If you use VS Code or Cursor, we have a .devcontainer folder built on [Microsoft's extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [README](.devcontainer/README.md) for details.
-
-### Deployment to Kubernetes
-
-Follow the [Quickstart Guide](docs/guides/dynamo_deploy/quickstart.md) to deploy to Kubernetes.

components/README.md

Lines changed: 80 additions & 0 deletions
@@ -0,0 +1,80 @@
+<!--
+SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+SPDX-License-Identifier: Apache-2.0
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+https://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+# Dynamo Components
+
+This directory contains the core components that make up the Dynamo inference framework. Each component serves a specific role in the distributed LLM serving architecture, enabling high-throughput, low-latency inference across multiple nodes and GPUs.
+
+## Supported Inference Engines
+
+Dynamo supports multiple inference engines (with a focus on SGLang, vLLM, and TensorRT-LLM), each with its own deployment configurations and capabilities:
+
+- **[vLLM](backends/vllm/README.md)** - High-performance LLM inference with native KV cache events and NIXL-based transfer mechanisms
+- **[SGLang](backends/sglang/README.md)** - Structured generation language framework with ZMQ-based communication
+- **[TensorRT-LLM](backends/trtllm/README.md)** - NVIDIA's optimized LLM inference engine with TensorRT acceleration
+
+Each engine provides launch scripts for different deployment patterns in their respective `/launch` & `/deploy` directories.
+
+## Core Components
+
+### [Backends](backends/)
+
+The backends directory contains inference engine integrations and implementations, with a key focus on:
+
+- **vLLM** - Full-featured vLLM integration with disaggregated serving, KV-aware routing, and SLA-based planning
+- **SGLang** - SGLang engine integration supporting disaggregated serving and KV-aware routing
+- **TensorRT-LLM** - TensorRT-LLM integration with disaggregated serving capabilities
+
+### [Frontend](frontend/)
+
+The frontend component provides the HTTP API layer and request processing:
+
+- **OpenAI-compatible HTTP server** - RESTful API endpoint for LLM inference requests
+- **Pre-processor** - Handles request preprocessing and validation
+- **Router** - Routes requests to appropriate workers based on load and KV cache state
+- **Auto-discovery** - Automatically discovers and registers available workers
+
+### [Router](router/)
+
+A high-performance request router written in Rust that:
+
+- Routes incoming requests to optimal workers based on KV cache state
+- Implements KV-aware routing to minimize cache misses
+- Provides load balancing across multiple worker instances
+- Supports both aggregated and disaggregated serving patterns
+
+### [Planner](planner/)
+
+The planner component monitors system state and dynamically adjusts worker allocation:
+
+- **Dynamic scaling** - Scales prefill/decode workers up and down based on metrics
+- **Multiple backends** - Supports local (circus-based) and Kubernetes scaling
+- **SLA-based planning** - Ensures inference performance targets are met
+- **Load-based planning** - Optimizes resource utilization based on demand
+
+## Getting Started
+
+To get started with Dynamo components (a command-level sketch follows this list):
+
+1. **Choose an inference engine** from the supported backends
+2. **Set up required services** (etcd and NATS) using Docker Compose
+3. **Configure** your chosen engine using Python wheels or building an image
+4. **Run deployment scripts** from the engine's launch directory
+5. **Monitor performance** using the metrics component
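
The numbered steps above translate roughly into the following commands (a sketch only; the compose file, wheel extra, and model name are taken from the repository README and may differ per engine):

```
# 2. Set up required services
docker compose -f deploy/docker-compose.yml up -d

# 3. Configure an engine via Python wheels (or build an image instead)
uv pip install "ai-dynamo[sglang]"

# 4. Launch a frontend and a worker (each backend's /launch directory has full scripts)
python -m dynamo.frontend --http-port 8080 &
python -m dynamo.sglang.worker Qwen/Qwen3-4B
```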
+
+For detailed instructions, see the README files in each component directory and the main [Dynamo documentation](../../docs/).
