This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

[Kernel] Disable CUTLASS kernels for fp8 (vllm-project#5505)
tlrmchlsmth authored and robertgshaw2-neuralmagic committed Jun 16, 2024
1 parent 9184a05 commit ac8c1a5
Showing 1 changed file with 3 additions and 1 deletion: vllm/model_executor/layers/quantization/fp8.py
@@ -257,7 +257,9 @@ def apply(self,
         # If dynamic, layer.input_scale is None and x_scale computed from x.
         # If static, layer.input_scale is scalar and x_scale is input_scale.

-        if bias is None and self.cutlass_fp8_supported:
+        # Temporarily disable CUTLASS kernels due to an illegal memory access
+        #if bias is None and self.cutlass_fp8_supported:
+        if False:
             qinput, x_scale = ops.scaled_fp8_quant(x, layer.input_scale)

             # Fused GEMM_DQ
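The change replaces the dispatch condition with a hard `if False:`, so every call falls through to the non-CUTLASS path while the CUTLASS code stays in place for easy re-enabling. A minimal sketch of that guarded-dispatch pattern (the function and return values here are illustrative stand-ins, not vLLM's actual API):

```python
def apply_fp8_linear(x, weight, cutlass_fp8_supported, bias=None,
                     cutlass_disabled=True):
    """Pick an fp8 GEMM path; a hypothetical stand-in for the real method."""
    # The commit effectively hardcodes `cutlass_disabled=True`: writing
    # `if False:` makes the fast-path branch unreachable without deleting it.
    if not cutlass_disabled and bias is None and cutlass_fp8_supported:
        return "cutlass_scaled_mm"  # fused quantize + GEMM + dequantize
    return "fallback_scaled_mm"     # slower but avoids the faulty kernel

# With the kernels disabled, even a supported, bias-free call uses the fallback:
print(apply_fp8_linear(None, None, cutlass_fp8_supported=True))
```

Keeping the original condition behind a comment (rather than deleting it) documents exactly which check to restore once the illegal-memory-access bug is fixed.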
