LlamaForCausalLM at fp16 w/ FlashAttention gives NAN loss #27212
Comments
Hi @as3eem
Also, if you are using padding, some of the NaNs could be fixed by #27114.
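For context on the padding point above, a common way to rule out padding-related NaNs is to keep padded positions out of the loss by setting their labels to -100. This is a minimal sketch, not the fix from #27114; the checkpoint name and example inputs are placeholders, and a CUDA GPU is assumed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; the original report does not name the exact one.
model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers have no pad token by default

batch = tokenizer(
    ["a short example", "a much longer example that forces the first one to be padded"],
    padding=True,
    return_tensors="pt",
)

# Mask out padded positions so they do not contribute to the loss.
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()
outputs = model(
    input_ids=batch["input_ids"].cuda(),
    attention_mask=batch["attention_mask"].cuda(),
    labels=labels.cuda(),
)
print("loss is NaN:", torch.isnan(outputs.loss).item())
```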
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I am pretty sure this is now fixed by #28142, as @pacman100 managed to make it work!
System Info
transformers version: 4.34.1

Who can help?
cc: @SunMarc @ArthurZucker
Information

Tasks

- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
Constraint: the task had to be run in a plain, vanilla way, without a PEFT wrapper or the Trainer class, so that parts of it could be customized later.
After much investigation: with the fp16 data type, the gradients overflow to ±inf, which in turn produces NaN logits as well as a NaN loss.
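A minimal sketch of the kind of setup that reproduces this. The checkpoint name and prompt are placeholders, a CUDA GPU with flash-attn installed is assumed, and the `use_flash_attention_2=True` flag is the one available on transformers 4.34.x (newer versions use `attn_implementation="flash_attention_2"`):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder checkpoint; the original report does not specify one.
model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,     # fp16 weights, as in the report
    use_flash_attention_2=True,    # FlashAttention kernel (transformers 4.34.x flag)
).cuda()

inputs = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt").to("cuda")
outputs = model(**inputs, labels=inputs["input_ids"])

# In the failing setup the loss and/or logits come back as NaN.
print("loss:", outputs.loss.item())
print("any NaN logits:", torch.isnan(outputs.logits).any().item())
```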
What didn't work?

- torch.cuda.amp.GradScaler()
- gradient clamping
- a reduced learning rate
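None of the above removed the NaNs. One workaround that is often suggested for fp16 overflow with Llama-style models is to train in bf16 instead of fp16, since bf16 keeps the fp32 exponent range and does not need a GradScaler. This is a sketch of that alternative under stated assumptions (Ampere-or-newer GPU, placeholder checkpoint, batch already prepared on the GPU), not the fix that eventually landed in #28142:

```python
import torch
from transformers import LlamaForCausalLM

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint

# bf16 has the same exponent range as fp32, so gradients are far less likely to overflow to inf.
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
).cuda()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(batch):
    # batch: dict with input_ids / attention_mask / labels already on the GPU
    outputs = model(**batch)
    outputs.loss.backward()
    # Clipping did not fix the fp16 NaNs in the report, but remains reasonable practice here.
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```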
Expected behavior
Receive non-NaN values in the logits (and a finite loss).