
Fix saving of generation_config for Llama-3 #1134

Merged
merged 3 commits into mosaicml:main on Apr 25, 2024

Conversation

eldarkurtic (Contributor)

The existing version of HuggingFaceCheckpointer does not respect the model's generation_config.json; during saving, it initializes the generation config from config.json instead. For Llama-3 models there is a discrepancy in eos_token_id, which is set to 128001 in config.json but to [128001, 128009] in generation_config.json. As a result, Llama-3 models saved with HuggingFaceCheckpointer end up with "eos_token_id": 128001 instead of "eos_token_id": [128001, 128009].
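A quick way to see the discrepancy described above (this is an illustrative sketch, not code from this PR; the model id is an assumption, and the meta-llama repo is gated on the Hugging Face Hub):

```python
from transformers import AutoConfig, GenerationConfig

# Assumed model id for illustration.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

config = AutoConfig.from_pretrained(model_id)
generation_config = GenerationConfig.from_pretrained(model_id)

# Per the PR description, these two disagree:
print(config.eos_token_id)             # 128001 (from config.json)
print(generation_config.eos_token_id)  # [128001, 128009] (from generation_config.json)
```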

This creates problems when using Llama-3 models produced with llm-foundry: generation will most likely run until the maximum number of tokens is exhausted instead of stopping at token 128009.
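A minimal sketch of the fix's intent, not the actual HuggingFaceCheckpointer code: when saving, carry the model's own generation config through to the checkpoint instead of rebuilding one from config.json. model_id and save_dir are placeholder names for this illustration.

```python
from transformers import AutoModelForCausalLM, GenerationConfig

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder
save_dir = "./llama3-checkpoint"                  # placeholder

model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach the original generation config so save_pretrained() serializes
# it to generation_config.json with eos_token_id = [128001, 128009],
# rather than a config derived from config.json (eos_token_id = 128001).
model.generation_config = GenerationConfig.from_pretrained(model_id)
model.save_pretrained(save_dir)
```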

eldarkurtic added a commit to IST-DASLab/llm-foundry that referenced this pull request on Apr 24, 2024
@dakinggg (Collaborator) left a comment:


Thanks for the fix! Please run pre-commit and then we can merge

@dakinggg merged commit 15abf8c into mosaicml:main on Apr 25, 2024
9 checks passed