doc/source/serve/tutorials/serve-deepseek.md (6 changes: 3 additions & 3 deletions)
@@ -13,9 +13,11 @@ This example shows how to deploy DeepSeek R1 or V3 with Ray Serve LLM.
To run this example, install the following:

```bash
-pip install "ray[llm]"
+pip install "ray[llm]==2.46.0"
```

+Note: Deploying DeepSeek-R1 requires at least 720GB of free disk space per worker node to store model weights.

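Since the weights alone approach that size, it can help to verify free space on each worker node before pulling the model. A minimal sketch of such a check, assuming the weights are cached under `/tmp` (substitute your actual cache directory, for example wherever `HF_HOME` points):

```python
import shutil

# Assumption: model weights land under this path; point it at your real
# model cache directory (for example, the directory HF_HOME points to).
CACHE_PATH = "/tmp"
REQUIRED_BYTES = 720 * 1024**3  # 720 GB, per the note above

free_bytes = shutil.disk_usage(CACHE_PATH).free
if free_bytes < REQUIRED_BYTES:
    raise RuntimeError(
        f"Only {free_bytes / 1024**3:.0f} GB free at {CACHE_PATH}; "
        "DeepSeek-R1 needs at least 720 GB per worker node."
    )
print("Disk check passed.")
```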
## Deployment

### Quick Deployment
@@ -51,7 +53,6 @@ llm_config = LLMConfig(
"max_model_len": 16384,
"enable_chunked_prefill": True,
"enable_prefix_caching": True,
"trust_remote_code": True,
},
)

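For context, the hunk above is the tail of an `LLMConfig` whose `engine_kwargs` no longer include `trust_remote_code`. A sketch of how the resulting Python deployment might look as a whole; the model id, model source, and surrounding boilerplate are assumptions based on the public `ray.serve.llm` API, not part of this diff:

```python
from ray import serve
from ray.serve.llm import LLMConfig, build_openai_app

llm_config = LLMConfig(
    # Assumption: illustrative model settings; the tutorial pins the
    # exact model_loading_config and placement options.
    model_loading_config={
        "model_id": "deepseek",
        "model_source": "deepseek-ai/DeepSeek-R1",
    },
    engine_kwargs={
        "max_model_len": 16384,
        "enable_chunked_prefill": True,
        "enable_prefix_caching": True,
        # "trust_remote_code": True  <- removed by this change
    },
)

# build_openai_app is the same entry point the YAML config below references.
app = build_openai_app({"llm_configs": [llm_config]})
serve.run(app)
```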
@@ -89,7 +90,6 @@ applications:
        max_model_len: 16384
        enable_chunked_prefill: true
        enable_prefix_caching: true
-        trust_remote_code: true
  import_path: ray.serve.llm:build_openai_app
  name: llm_app
  route_prefix: "/"
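Once deployed (for example with `serve run config.yaml` on a file containing the YAML above), the app serves an OpenAI-compatible API under the route prefix. A minimal client sketch, assuming Serve's default HTTP port 8000 and a registered model id of `deepseek`:

```python
from openai import OpenAI

# Assumptions: default Serve HTTP port 8000, route_prefix "/", and a
# model id of "deepseek" from the config; adjust to match your setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="fake-key")
response = client.chat.completions.create(
    model="deepseek",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```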