
Commit e073438

[Bugfix] Fix MoeWNA16Method activation (#14024)
1 parent f58f8b5 commit e073438

File tree

1 file changed (+2, -1 lines)


vllm/model_executor/layers/quantization/moe_wna16.py

Lines changed: 2 additions & 1 deletion
@@ -293,9 +293,10 @@ def apply(
         custom_routing_function: Optional[Callable] = None,
         scoring_func: str = "softmax",
         e_score_correction_bias: Optional[torch.Tensor] = None,
+        activation: str = "silu",
     ) -> torch.Tensor:
         from vllm.model_executor.layers.fused_moe import fused_experts
-
+        assert activation == "silu", "Only SiLU activation is supported."
         topk_weights, topk_ids = FusedMoE.select_experts(
             hidden_states=x,
             router_logits=router_logits,
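
For context, a minimal sketch of why the missing parameter was a bug, under the assumption that the shared fused-MoE layer forwards an activation keyword to each quantization method's apply(). The function names below are illustrative stand-ins, not the actual vLLM code:

# Hypothetical, simplified stand-ins: only the signatures matter here.

def apply_before_fix(scoring_func: str = "softmax",
                     e_score_correction_bias=None) -> None:
    """Old signature: no 'activation' parameter."""

def apply_after_fix(scoring_func: str = "softmax",
                    e_score_correction_bias=None,
                    activation: str = "silu") -> None:
    """Patched signature: accepts 'activation'; only the SiLU path is
    supported, hence the guard mirroring the assert added in this commit."""
    assert activation == "silu", "Only SiLU activation is supported."

# A caller that forwards the keyword, as the fused-MoE layer is assumed to do:
apply_after_fix(activation="silu")        # works after the patch
try:
    apply_before_fix(activation="silu")   # raised TypeError before the patch
except TypeError as exc:
    print(exc)  # unexpected keyword argument 'activation'

Asserting on unsupported activations, rather than silently dropping the argument, surfaces a clear error if a model ever requests something other than SiLU.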
