
Conversation

@ttanzhiqiang
Contributor

What this PR does / why we need it?

Fixes the ep=1, etp=16 bug reported in #971, following the approach of #863.

Does this PR introduce any user-facing change?

Added an etp (expert tensor parallel) logic branch in deepseekv2 and fused_moe; a rough sketch of the idea follows below.
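
For context, a minimal sketch of what such an etp branch amounts to. The names and structure here are illustrative assumptions, not the actual vllm-ascend diff: the point is only that the MoE sharding strategy is keyed on `expert_tensor_parallel_size` from `--additional-config`.

```python
# Illustrative sketch only (hypothetical names), not the actual
# vllm-ascend code: shows the kind of dispatch an "etp logic branch"
# in fused_moe implies, keyed on expert_tensor_parallel_size.

def select_moe_sharding(tp_size: int, etp_size: int) -> dict:
    """Choose how fused-MoE expert weights are partitioned.

    tp_size:  total tensor-parallel world size (16 in the runs below)
    etp_size: expert_tensor_parallel_size from --additional-config
    """
    assert tp_size % etp_size == 0, "etp must divide tp"
    if etp_size == 1:
        # etp=1: each rank owns whole experts; tokens are routed
        # across ranks (pure expert parallelism).
        return {"ep_size": tp_size, "etp_size": 1}
    # etp>1: each expert's weight matrices are split etp_size ways,
    # like ordinary tensor parallelism inside each expert group.
    return {"ep_size": tp_size // etp_size, "etp_size": etp_size}


if __name__ == "__main__":
    print(select_moe_sharding(16, 1))   # {'ep_size': 16, 'etp_size': 1}
    print(select_moe_sharding(16, 16))  # {'ep_size': 1, 'etp_size': 16}
```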

How was this patch tested?

nohup python -m vllm.entrypoints.openai.api_server --model=/mnt/deepseek/DeepSeek-R1-W8A8-VLLM \
  --trust-remote-code \
  --distributed-executor-backend=mp \
  -tp=16 \
  -dp=1 \
  --port 8006 \
  --max-num-seqs 24 \
  --max-model-len 32768 \
  --max-num-batched-tokens 32768 \
  --block-size 128 \
  --enable-expert-parallel \
  --compilation_config 0 \
  --gpu-memory-utilization 0.96 \
  --additional-config '{"expert_tensor_parallel_size":1, "ascend_scheduler_config":{}}' &> run.log &
[screenshot]
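
Once the server above is up, a quick smoke test can confirm it serves requests. This is my own sketch, assuming the default OpenAI-compatible /v1/completions route on port 8006 and that vLLM registers the model under its path:

```python
# Smoke test for the server launched above; assumes the default
# OpenAI-compatible route and the model path as the model name.
import json
import urllib.request

payload = {
    "model": "/mnt/deepseek/DeepSeek-R1-W8A8-VLLM",
    "prompt": "Hello",
    "max_tokens": 32,
}
req = urllib.request.Request(
    "http://localhost:8006/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["text"])
```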

nohup python -m vllm.entrypoints.openai.api_server --model=/mnt/deepseek/DeepSeek-R1-W8A8-VLLM \
  --trust-remote-code \
  --distributed-executor-backend=mp \
  -tp=16 \
  -dp=1 \
  --port 8006 \
  --max-num-seqs 24 \
  --max-model-len 32768 \
  --max-num-batched-tokens 32768 \
  --block-size 128 \
  --enable-expert-parallel \
  --compilation_config 0 \
  --gpu-memory-utilization 0.96 \
  --additional-config '{"expert_tensor_parallel_size":16, "ascend_scheduler_config":{}}' &> run.log &
[Screenshot 2025-05-28 16:16:42]

Signed-off-by: ttanzhiqiang <389825161@qq.com>
@ttanzhiqiang
Contributor Author

@wangxiyuan @Angazenn

ttanzhiqiang and others added 6 commits May 28, 2025 18:19
Signed-off-by: ttanzhiqiang <389825161@qq.com>
@ttanzhiqiang
Contributor Author

The latest branch runs smoothly: vllm-ascend commit 6eddbd2, vllm releases/v0.9.0.
nohup python -m vllm.entrypoints.openai.api_server --model=/mnt/deepseek/DeepSeek-R1-W8A8-VLLM \
  --trust-remote-code \
  --distributed-executor-backend=mp \
  -tp=16 \
  -dp=1 \
  --port 8006 \
  --max-num-seqs 24 \
  --max-model-len 32768 \
  --max-num-batched-tokens 32768 \
  --block-size 128 \
  --enable-expert-parallel \
  --compilation_config 0 \
  --gpu-memory-utilization 0.96 \
  --additional-config '{"expert_tensor_parallel_size":1}' &> run.log &

[Screenshot 2025-05-29 22:53:30]

nohup python -m vllm.entrypoints.openai.api_server --model=/mnt/deepseek/DeepSeek-R1-W8A8-VLLM \
  --trust-remote-code \
  --distributed-executor-backend=mp \
  -tp=16 \
  -dp=1 \
  --port 8006 \
  --max-num-seqs 24 \
  --max-model-len 32768 \
  --max-num-batched-tokens 32768 \
  --block-size 128 \
  --enable-expert-parallel \
  --compilation_config 0 \
  --gpu-memory-utilization 0.96 \
  --additional-config '{"expert_tensor_parallel_size":16}' &> run.log &

[Screenshot 2025-05-29 22:48:55]

@github-actions

github-actions bot commented Jun 4, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@ttanzhiqiang
Contributor Author

This will be done in #1012.
