Revert "fix opt fc1/fc2 layer modules should not be quantized (#118)" (#149)

This reverts commit c9a0688.
Qubitium authored Jul 2, 2024
1 parent cd80805 commit 83c002d
Showing 1 changed file with 2 additions and 2 deletions.
gptqmodel/models/opt.py (2 additions, 2 deletions)

@@ -15,6 +15,6 @@ class OPTGPTQ(BaseGPTQModel):
     layer_modules = [
         ["self_attn.k_proj", "self_attn.v_proj", "self_attn.q_proj"],
         ["self_attn.out_proj"],
-        # ["fc1"], disabled: not a good candidate for quantization
-        # ["fc2"], disabled: not a good candidate for quantization
+        ["fc1"],
+        ["fc2"],
     ]
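Based on the hunk above, after this revert the `layer_modules` attribute of `OPTGPTQ` once again lists the MLP projections `fc1` and `fc2` as quantization targets alongside the attention projections. A sketch of the resulting attribute (only `layer_modules` comes from the diff; the surrounding class body is assumed):

```python
# Sketch of the layer_modules value in gptqmodel/models/opt.py after the revert.
# Each inner list groups module names that are quantized together per decoder layer.
layer_modules = [
    ["self_attn.k_proj", "self_attn.v_proj", "self_attn.q_proj"],  # attention input projections
    ["self_attn.out_proj"],                                        # attention output projection
    ["fc1"],  # MLP up-projection, re-enabled for quantization by this revert
    ["fc2"],  # MLP down-projection, re-enabled for quantization by this revert
]
```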
