
Commit b4b1ab0

machenglong2025 authored and Angazenn committed
Remove unused code in fused_moe.py (vllm-project#2805)
### What this PR does / why we need it?
Line 408 already declares `mc2_mask`; remove the duplicated, unused code.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with existing tests.

- vLLM version: v0.10.1.1
- vLLM main: vllm-project/vllm@60f0843

Signed-off-by: machenglong <machenglong_yewu@cmss.chinamobile.com>
1 parent caf169f commit b4b1ab0

1 file changed: +0 −2 lines changed

vllm_ascend/ops/fused_moe.py

Lines changed: 0 additions & 2 deletions
```diff
@@ -413,8 +413,6 @@ def forward(self,
             # When all_reduce_merge is in progress, shared_experts does not do all_reduce in mlp, but waits until shared_experts+router_experts are completed before doing all_reduce
             shared_hidden_states = shared_experts(hidden_states)

-        mc2_mask = forward_context.mc2_mask
-
         enable_sp = _metadata_for_padding is not None and _metadata_for_padding.not_dummy_and_is_prefill
         tp_size = get_tensor_model_parallel_world_size()
         if enable_sp:
```
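
For context, here is a minimal sketch of the pattern this commit cleans up. It is not the actual `vllm_ascend/ops/fused_moe.py` source: the class, function names, and structure below are simplified assumptions. The point it illustrates is that `mc2_mask` is already read from the forward context earlier in `forward()` (around line 408), so the second, identical assignment that the diff deletes was dead code.

```python
# Hypothetical sketch (not the real fused_moe.py), showing why the deleted
# re-assignment of mc2_mask was redundant.

class ForwardContextSketch:
    """Stand-in for the per-step forward context that carries mc2_mask."""

    def __init__(self, mc2_mask):
        self.mc2_mask = mc2_mask


def forward_sketch(forward_context, hidden_states, shared_experts=None):
    # ~line 408 in the original file: mc2_mask is declared once here.
    mc2_mask = forward_context.mc2_mask

    if shared_experts is not None:
        # shared_experts runs on the same hidden_states; mc2_mask is untouched.
        shared_hidden_states = shared_experts(hidden_states)

    # The removed lines re-assigned the same, unchanged value here:
    #     mc2_mask = forward_context.mc2_mask
    # Nothing rebound the variable in between, so the second read had no effect.
    return hidden_states, mc2_mask
```
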
