
Commit 853efb9

Merge branch 'vllm-project:main' into main
2 parents: 3cf301a + a9480d5

30 files changed: +2119, -398 lines

.github/workflows/vllm_ascend_test.yaml

Lines changed: 1 addition & 0 deletions
@@ -278,6 +278,7 @@ jobs:
           pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_QwQ
           pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_DeepSeek_dbo
           pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_DeepSeekV3_dbo
+          pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_alltoallv
           pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_Qwen3_W4A8DYNAMIC
           pytest -sv tests/e2e/multicard/test_data_parallel.py
           pytest -sv tests/e2e/multicard/ --ignore=tests/e2e/multicard/test_ilama_lora_tp2.py \
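
The new CI step can also be reproduced outside of GitHub Actions. A minimal sketch, assuming a multi-card Ascend host with the vllm-ascend repository checked out and its test dependencies installed:

```bash
# Run only the newly added alltoallv e2e test from the repository root.
pytest -sv tests/e2e/multicard/test_offline_inference_distributed.py::test_models_distributed_alltoallv
```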

.github/workflows/vllm_ascend_test_310p.yaml

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     container:
       # TODO(yikun): Remove m.daocloud.io prefix when infra proxy ready
-      image: swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/cann:8.1.rc1-310p-ubuntu22.04-py3.10
+      image: swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/cann:8.2.rc1-310p-ubuntu22.04-py3.11
     env:
       VLLM_LOGGING_LEVEL: ERROR
       VLLM_USE_MODELSCOPE: True
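
To try the bumped base image outside of CI, here is a minimal sketch of pulling and entering it with docker (docker itself and the device passthroughs below are assumptions; the exact /dev/davinci* devices and driver path depend on your host):

```bash
# Pull the CANN 8.2.rc1 image the 310p workflow now uses.
docker pull swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/cann:8.2.rc1-310p-ubuntu22.04-py3.11

# Open an interactive shell with typical Ascend device mounts.
docker run -it --rm \
    --device /dev/davinci0 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    swr.cn-southwest-2.myhuaweicloud.com/base_image/ascend-ci/cann:8.2.rc1-310p-ubuntu22.04-py3.11 \
    bash
```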

docs/source/tutorials/single_node_300i.md

Lines changed: 76 additions & 2 deletions
@@ -1,7 +1,8 @@
 # Single Node (Atlas 300I series)
 
 ```{note}
-This Atlas 300I series is currently experimental. In future versions, there may be behavioral changes around model coverage, performance improvement.
+1. This Atlas 300I series is currently experimental. In future versions, there may be behavioral changes around model coverage, performance improvement.
+2. Currently, the 310I series only supports eager mode and the data type is float16.
 ```
 
 ## Run vLLM on Altlas 300I series
@@ -83,7 +84,7 @@ curl http://localhost:8000/v1/completions \
 
 ::::
 
-::::{tab-item} Qwen/Qwen2.5-7B-Instruct
+::::{tab-item} Qwen2.5-7B-Instruct
 :sync: qwen7b
 
 Run the following command to start the vLLM server:
@@ -113,6 +114,36 @@ curl http://localhost:8000/v1/completions \
 
 ::::
 
+::::{tab-item} Qwen2.5-VL-3B-Instruct
+:sync: qwen-vl-2.5-3b
+
+Run the following command to start the vLLM server:
+
+```{code-block} bash
+:substitutions:
+vllm serve Qwen/Qwen2.5-VL-3B-Instruct \
+--tensor-parallel-size 1 \
+--enforce-eager \
+--dtype float16 \
+--compilation-config '{"custom_ops":["none", "+rms_norm", "+rotary_embedding"]}'
+```
+
+Once your server is started, you can query the model with input prompts
+
+```bash
+curl http://localhost:8000/v1/completions \
+    -H "Content-Type: application/json" \
+    -d '{
+        "prompt": "The future of AI is",
+        "max_tokens": 64,
+        "top_p": 0.95,
+        "top_k": 50,
+        "temperature": 0.6
+    }'
+```
+
+::::
+
 ::::{tab-item} Pangu-Pro-MoE-72B
 :sync: pangu
 
@@ -251,6 +282,49 @@ clean_up()
 
 ::::
 
+::::{tab-item} Qwen2.5-VL-3B-Instruct
+:sync: qwen-vl-2.5-3b
+
+```{code-block} python
+:substitutions:
+import gc
+import torch
+from vllm import LLM, SamplingParams
+from vllm.distributed.parallel_state import (destroy_distributed_environment,
+                                             destroy_model_parallel)
+
+def clean_up():
+    destroy_model_parallel()
+    destroy_distributed_environment()
+    gc.collect()
+    torch.npu.empty_cache()
+
+prompts = [
+    "Hello, my name is",
+    "The future of AI is",
+]
+# Create a sampling params object.
+sampling_params = SamplingParams(max_tokens=100, top_p=0.95, top_k=50, temperature=0.6)
+# Create an LLM.
+llm = LLM(
+    model="Qwen/Qwen2.5-VL-3B-Instruct",
+    tensor_parallel_size=1,
+    enforce_eager=True,  # For 300I series, only eager mode is supported.
+    dtype="float16",  # IMPORTANT: some ATB ops do not support bf16 on 300I series.
+    compilation_config={"custom_ops":["none", "+rms_norm", "+rotary_embedding"]},  # High performance for 300I series
+)
+# Generate texts from the prompts.
+outputs = llm.generate(prompts, sampling_params)
+for output in outputs:
+    prompt = output.prompt
+    generated_text = output.outputs[0].text
+    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
+
+del llm
+clean_up()
+```
+
+::::
+
 ::::{tab-item} Pangu-Pro-MoE-72B
 :sync: pangu
 
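One note on the new Qwen2.5-VL-3B-Instruct tabs: both examples exercise only the text path. Since this is a vision-language model, an image query would go through the OpenAI-compatible chat endpoint instead of /v1/completions. A minimal sketch against the server started above (the image URL is a placeholder, and the payload shape is vLLM's standard chat API rather than anything specific to this commit):

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen/Qwen2.5-VL-3B-Instruct",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
                {"type": "text", "text": "Describe this image."}
            ]
        }],
        "max_tokens": 64
    }'
```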