Remove unnecessary pad token #2428
base: main
Conversation
Thanks for the PR! I'll need to double check that this works ok with FSDP and produces the same eval results as before this PR. Left one initial comment related to one of the test failures
```diff
 batch = {
-    'input_ids': torch.stack(inputs),
+    'input_ids': input_ids,
```
I believe the test error you are getting is because `input_ids` and `labels` should not be the same tensor: `labels` gets modified to put a -100 in when the labels get rolled so they are aligned for the next-token objective.
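For illustration, a minimal sketch of why the two must not share storage (this is not the repo's actual collator or recipe code, and `IGNORE_INDEX` is just an assumed name for the -100 constant):

```python
import torch

IGNORE_INDEX = -100  # assumed name for the ignore index used in the loss

input_ids = torch.tensor([[5, 6, 7, 8]])
labels = input_ids.clone()          # separate storage, safe to edit in place

# Roll labels left by one so position i predicts token i+1, then mask the
# wrapped-around last position with the ignore index.
labels = torch.roll(labels, shifts=-1, dims=1)
labels[:, -1] = IGNORE_INDEX        # this write must not touch input_ids

print(input_ids)  # tensor([[5, 6, 7, 8]])           -- unchanged
print(labels)     # tensor([[   6,    7,    8, -100]])
```

If `labels` were the same tensor object as `input_ids`, the in-place write of -100 (or any later in-place edit) would corrupt the model inputs as well, which is what the failing test is guarding against.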
What does this PR do?
Instead of padding every sequence to max_seq_len, pad only to the maximum sequence length in the batch.
I have provided a simple and concise solution; a sketch of the idea is shown below.
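A minimal sketch of the approach, assuming a hypothetical collate function (`collate_to_batch_max` and its parameters are illustrative names, not this repo's API):

```python
import torch
import torch.nn.functional as F

def collate_to_batch_max(sequences, pad_token_id=0):
    # Pad each sequence only up to the longest sequence in this batch,
    # instead of a fixed global max_seq_len.
    batch_max = max(seq.size(0) for seq in sequences)
    input_ids = torch.stack([
        F.pad(seq, (0, batch_max - seq.size(0)), value=pad_token_id)
        for seq in sequences
    ])
    return {'input_ids': input_ids}

# Example: sequences of lengths 3, 5, and 2 are padded to length 5,
# not to some global max_seq_len such as 2048.
batch = collate_to_batch_max([
    torch.tensor([1, 2, 3]),
    torch.tensor([4, 5, 6, 7, 8]),
    torch.tensor([9, 10]),
])
print(batch['input_ids'].shape)  # torch.Size([3, 5])
```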
What issue(s) does this change relate to?