
Commit dcdd1e5

tweak comment
Signed-off-by: Bill Nell <bnell@redhat.com>
Parent: 84b48b3

File tree

1 file changed (+3 −3 lines)


vllm/model_executor/layers/fused_moe/shared_fused_moe.py

Lines changed: 3 additions & 3 deletions
@@ -25,9 +25,9 @@ def __init__(
         super().__init__(**kwargs)
         self._shared_experts = shared_experts
         # Disable shared expert overlap if EP is disabled or we are not using
-        # flashinfer + DP since there is nothing to be gained in this case
-        # and it prevents the shared experts from being hidden from
-        # torch.compile.
+        # flashinfer + DP since there is nothing to be gained in this case.
+        # Disabling the overlap optimization also prevents the shared experts
+        # from being hidden from torch.compile.
         self.use_overlapped = use_overlapped and not (
             self.use_ep or self.use_flashinfer_cutlass_kernels
         ) and self.shared_experts is not None
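
The commit only rewords the comment; the gating logic itself is unchanged. As a rough illustration of that logic, here is a minimal, hypothetical sketch (not the actual vLLM SharedFusedMoE implementation) of how a layer might compute such a flag and branch on it. The class name ToySharedMoE and its forward body are invented for illustration; only the shape of the use_overlapped condition mirrors the diff above.

    from typing import Optional

    import torch
    import torch.nn as nn


    class ToySharedMoE(nn.Module):
        """Toy stand-in showing the overlap gating pattern from the diff."""

        def __init__(
            self,
            shared_experts: Optional[nn.Module],
            use_overlapped: bool = True,
            use_ep: bool = False,
            use_flashinfer_cutlass_kernels: bool = False,
        ) -> None:
            super().__init__()
            self._shared_experts = shared_experts
            self.use_ep = use_ep
            self.use_flashinfer_cutlass_kernels = use_flashinfer_cutlass_kernels
            # Same shape as the condition in the diff: keep the overlap only
            # when EP and the flashinfer cutlass path are both off and there
            # actually are shared experts to run.
            self.use_overlapped = (
                use_overlapped
                and not (self.use_ep or self.use_flashinfer_cutlass_kernels)
                and self._shared_experts is not None
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            if self.use_overlapped and self._shared_experts is not None:
                # Overlapped path: shared experts run inside this layer (and
                # are therefore hidden from torch.compile).
                return self._shared_experts(x)
            # Non-overlapped path: shared experts are left to the caller, so
            # torch.compile sees them as ordinary modules.
            return x

In this toy version, constructing ToySharedMoE(nn.Linear(8, 8), use_ep=True) or with use_flashinfer_cutlass_kernels=True leaves use_overlapped as False, so the shared experts stay on the compile-visible path.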
