Commit a46b7b3

TEMP: disable nested torch compilation

Signed-off-by: ProExpertProg <lgovedic@redhat.com>

1 parent: b07636d

File tree

1 file changed, 1 insertion(+), 1 deletion(-)

vllm/model_executor/layers/fused_moe/fused_moe.py (1 addition, 1 deletion)

@@ -1126,7 +1126,7 @@ def fused_topk_bias(
 
 
 # This is used by the Deepseek-V2 and Deepseek-V3 model
-@torch.compile(dynamic=True, backend=current_platform.simple_compile_backend)
+# @torch.compile(dynamic=True, backend=current_platform.simple_compile_backend)
 def grouped_topk(
     hidden_states: torch.Tensor,
     gating_output: torch.Tensor,
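For context on what the decorated function does: `grouped_topk` implements the expert-routing scheme used by DeepSeek-V2/V3-style MoE layers, where experts are partitioned into groups, only the highest-scoring groups are kept, and the final top-k experts are selected from those groups. The sketch below is not vLLM's implementation (which operates on batched `torch.Tensor`s); it is a minimal plain-Python illustration of the routing idea for a single token, with all names and parameters chosen here for clarity.

```python
def grouped_topk_sketch(scores, num_groups, topk_groups, top_k):
    """Pick top_k expert indices, restricted to the topk_groups best groups.

    Hypothetical single-token sketch of grouped top-k routing:
    - scores: per-expert routing scores (length divisible by num_groups)
    - num_groups: number of expert groups
    - topk_groups: how many groups survive the group-level cut
    - top_k: how many experts to select overall
    """
    n = len(scores)
    group_size = n // num_groups
    # Score each group by its single best expert.
    group_scores = [max(scores[g * group_size:(g + 1) * group_size])
                    for g in range(num_groups)]
    # Keep only the topk_groups highest-scoring groups.
    keep = set(sorted(range(num_groups),
                      key=lambda g: group_scores[g], reverse=True)[:topk_groups])
    # Mask out experts outside the kept groups, then take the global top_k.
    masked = [s if (i // group_size) in keep else float("-inf")
              for i, s in enumerate(scores)]
    return sorted(range(n), key=lambda i: masked[i], reverse=True)[:top_k]

# Example: 8 experts in 4 groups; keep the 2 best groups, then pick 2 experts.
scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.3, 0.7, 0.4]
print(grouped_topk_sketch(scores, num_groups=4, topk_groups=2, top_k=2))  # → [1, 3]
```

Expert 6 (score 0.7) outscores expert 3 (0.8)'s groupmates elsewhere, but its group loses the group-level cut, so it is never eligible; this group restriction is what distinguishes grouped top-k from a plain global top-k.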
