[Bugfix] Fix null modules_to_not_convert in FBGEMM Fp8 quantization (vllm-project#6665)

Signed-off-by: Alvant <alvasian@yandex.ru>
cli99 authored and Alvant committed Oct 26, 2024
1 parent 17ea782 commit bbdf873
Showing 1 changed file with 1 addition and 1 deletion.

vllm/model_executor/layers/quantization/fbgemm_fp8.py
@@ -31,7 +31,7 @@ class FBGEMMFp8Config(QuantizationConfig):
     """Config class for FBGEMM Fp8."""
 
     def __init__(self, ignore_list: List[str], input_scale_ub: float):
-        self.ignore_list = ignore_list
+        self.ignore_list = ignore_list if ignore_list else []
         self.input_scale_ub = input_scale_ub
 
         # For GPUs that lack FP8 hardware support, we can leverage the Marlin
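
Why the one-line guard matters, as a minimal sketch: if a checkpoint's quantization config serializes modules_to_not_convert as JSON null, the constructor receives None instead of a list, and any later membership check against ignore_list crashes. The is_layer_ignored helper and the JSON keys below are illustrative stand-ins, not vLLM's actual API.

import json
from typing import List


class FBGEMMFp8Config:
    """Simplified stand-in for vLLM's FBGEMMFp8Config."""

    def __init__(self, ignore_list: List[str], input_scale_ub: float):
        # The fix: fall back to an empty list when ignore_list is None,
        # so membership checks never operate on a NoneType.
        self.ignore_list = ignore_list if ignore_list else []
        self.input_scale_ub = input_scale_ub

    def is_layer_ignored(self, prefix: str) -> bool:
        # With the old code and ignore_list=None, this line raised
        # TypeError: argument of type 'NoneType' is not iterable.
        return prefix in self.ignore_list


# A quantization config where "modules_to_not_convert" is null.
raw = '{"modules_to_not_convert": null, "activation_scale_ub": 1200.0}'
quant = json.loads(raw)
config = FBGEMMFp8Config(ignore_list=quant["modules_to_not_convert"],
                         input_scale_ub=quant["activation_scale_ub"])
print(config.is_layer_ignored("lm_head"))  # False, instead of a crash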
