
Commit 7e6fc63

[Serve.llm] Pin Ray version for DeepSeek example (#54926)
1 parent: d397290

File tree: 1 file changed (+3, −3)


doc/source/serve/tutorials/serve-deepseek.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -13,9 +13,11 @@ This example shows how to deploy DeepSeek R1 or V3 with Ray Serve LLM.
 To run this example, install the following:
 
 ```bash
-pip install "ray[llm]"
+pip install "ray[llm]==2.46.0"
 ```
 
+Note: Deploying DeepSeek-R1 requires at least 720GB of free disk space per worker node to store model weights.
+
 ## Deployment
 
 ### Quick Deployment
````
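The hunk above adds a note that DeepSeek-R1 needs at least 720GB of free disk space per worker node. A minimal pre-flight check for that requirement could look like the sketch below; `has_enough_disk` is a hypothetical helper built on the standard library, not part of Ray.

```python
import shutil


def has_enough_disk(path: str, required_gb: float) -> bool:
    """Return True if the filesystem holding `path` has at least
    `required_gb` gigabytes free, using shutil.disk_usage."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024**3


# Example: check the model-weights directory against the 720 GB
# figure from the note above (path is an assumption).
if __name__ == "__main__":
    print(has_enough_disk("/tmp", 720))
```

Running such a check on each worker node before deployment avoids a failed download partway through fetching the weights.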
```diff
@@ -51,7 +53,6 @@ llm_config = LLMConfig(
         "max_model_len": 16384,
         "enable_chunked_prefill": True,
         "enable_prefix_caching": True,
-        "trust_remote_code": True,
     },
 )
 
```

```diff
@@ -89,7 +90,6 @@ applications:
        max_model_len: 16384
        enable_chunked_prefill: true
        enable_prefix_caching: true
-       trust_remote_code: true
  import_path: ray.serve.llm:build_openai_app
  name: llm_app
  route_prefix: "/"
```
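For context, the fields touched by this hunk sit inside a Ray Serve config file. A sketch of how they might fit together after this commit is shown below; the model ID and the surrounding `model_loading_config` block are illustrative assumptions, not part of the diff.

```yaml
# Hypothetical serve config sketch. Only the engine_kwargs entries,
# import_path, name, and route_prefix come from the diff above;
# other fields and values are assumptions for illustration.
applications:
- name: llm_app
  route_prefix: "/"
  import_path: ray.serve.llm:build_openai_app
  args:
    llm_configs:
    - model_loading_config:
        model_id: deepseek  # assumed placeholder; use the tutorial's model ID
      engine_kwargs:
        max_model_len: 16384
        enable_chunked_prefill: true
        enable_prefix_caching: true
```

With `trust_remote_code` removed by this commit, the engine falls back to its default behavior for remote code in model repositories.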
