FIX [quantization / ESM] Fix ESM 8bit / 4bit with bitsandbytes #29329
Conversation
@@ -377,7 +377,7 @@ def forward(
         if head_mask is not None:
             attention_probs = attention_probs * head_mask

-        context_layer = torch.matmul(attention_probs, value_layer)
+        context_layer = torch.matmul(attention_probs.to(value_layer.dtype), value_layer)
This was needed to perform inference correctly; otherwise you get a dtype mismatch.
What do we get if we don't do this fix?
You get a dtype mismatch :/
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
LGTM!
Thanks
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Thanks for the quick fix, everyone!
…29329)

* fix ESM 8bit

* Apply suggestions from code review

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fixup

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
What does this PR do?
Fixes: #29323
Currently on main, simply running the reproduction fails with an error.
This is because the model pushed to "facebook/esm2_t36_3B_UR50D" does not contain the inv_freq buffer. Maybe during the HfQuantizer refactor we did not properly deal with that specific scenario, leading to this bug for transformers > 4.37.

cc @SunMarc
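For context on why a checkpoint can legitimately lack inv_freq: one common pattern is to register it as a non-persistent buffer, so it is recomputed at module init instead of being loaded from the state dict. A minimal sketch with a hypothetical rotary-embedding module (not the actual ESM code):

```python
import torch
import torch.nn as nn

class RotaryEmbedding(nn.Module):
    """Hypothetical module illustrating a non-persistent inv_freq buffer."""

    def __init__(self, dim: int, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        # persistent=False keeps inv_freq out of state_dict(), so a
        # checkpoint saved without it still loads without missing keys.
        self.register_buffer("inv_freq", inv_freq, persistent=False)

rot = RotaryEmbedding(64)
print("inv_freq" in rot.state_dict())  # False: recomputed, never serialized
```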
I ran the quantization tests and they all seem to pass on my end.