
Commit 329c748

machenglong2025 authored and offline0806 committed
Remove unused code in fused_moe.py (vllm-project#2805)

### What this PR does / why we need it?
Line 408 already declares mc2_mask, so remove the duplicated, unused assignment.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with the existing tests.

- vLLM version: v0.10.1.1
- vLLM main: vllm-project/vllm@60f0843

Signed-off-by: machenglong <machenglong_yewu@cmss.chinamobile.com>
Signed-off-by: offline0806 <z00858301@china.huawei.com>
1 parent e4e87fa commit 329c748

File tree

1 file changed: +0 −2 lines changed


vllm_ascend/ops/fused_moe.py

Lines changed: 0 additions & 2 deletions
@@ -425,8 +425,6 @@ def forward(self,
         # When all_reduce_merge is in progress, shared_experts does not do all_reduce in mlp, but waits until shared_experts+router_experts are completed before doing all_reduce
         shared_hidden_states = shared_experts(hidden_states)

-        mc2_mask = forward_context.mc2_mask
-
         enable_sp = _metadata_for_padding is not None and _metadata_for_padding.not_dummy_and_is_prefill
         tp_size = get_tensor_model_parallel_world_size()
         if enable_sp:
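For context, here is a minimal sketch of why the deleted lines were dead code. The names forward_context, mc2_mask, shared_experts, and hidden_states come from the diff; the surrounding structure (an earlier read of the mask near line 408, as stated in the PR description) is assumed for illustration and is not copied from fused_moe.py.

```python
# Minimal sketch, not the actual vllm-ascend implementation.
from dataclasses import dataclass


@dataclass
class ForwardContext:
    mc2_mask: list  # stand-in for the real mask tensor


def forward(hidden_states, forward_context, shared_experts=None):
    # Earlier in forward() the mask is already read from the context
    # (line 408 per the PR description).
    mc2_mask = forward_context.mc2_mask

    if shared_experts is not None:
        # Shared-expert all_reduce is deferred until router experts finish.
        shared_hidden_states = shared_experts(hidden_states)

    # The removed lines re-assigned the same attribute here:
    #     mc2_mask = forward_context.mc2_mask
    # Nothing between the two reads mutates forward_context, so the second
    # assignment had no effect and could be deleted safely.
    return hidden_states, mc2_mask


# Tiny usage example with stand-in values.
ctx = ForwardContext(mc2_mask=[1, 0, 1])
out, mask = forward(hidden_states=[0.1, 0.2],
                    forward_context=ctx,
                    shared_experts=lambda x: x)
print(mask)  # [1, 0, 1]
```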
