[Tokenizer Serialization] Fix the broken serialisation #27099
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Thank you @ArthurZucker
Pegasus is the only slow failure I witnessed, so I'm checking this now before merging!
OK, the issue is that when we force the added tokens encoder in the slow tokenizer, the fast tokenizer of course can't do this. As a result, the eos token gets replaced at index 0 in the slow tokenizer but not in the fast one.
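To make the mismatch concrete, here is a minimal sketch of how one could compare the two tokenizers; the checkpoint name is an assumption, not taken from the PR:

```python
# Sketch: compare what the slow and fast Pegasus tokenizers store at the low
# indices where special tokens live. The checkpoint name is an assumption
# (any Pegasus checkpoint should behave similarly).
from transformers import AutoTokenizer

slow = AutoTokenizer.from_pretrained("google/pegasus-xsum", use_fast=False)
fast = AutoTokenizer.from_pretrained("google/pegasus-xsum", use_fast=True)

# A divergence at these indices reproduces the slow-only failure described above.
for idx in range(3):
    print(idx, slow.convert_ids_to_tokens(idx), fast.convert_ids_to_tokens(idx))
```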
* nits
* nits
* actual fix
* style
* ze fix
* fix fix fix style
What does this PR do?
This should fix some serialization issues: mostly save_pretrained with all the init kwargs, and from_pretrained with dicts.
Fixes #26732.
With main, the reproduction from #26732 fails. This is because the tokenizer had its special tokens saved as dicts, and the call to convert_added_tokens is made after this.
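For context, a minimal sketch of the save/reload roundtrip this PR targets; the checkpoint name and the output path are placeholders, not taken from the PR:

```python
# Sketch of the roundtrip: special tokens are serialized as AddedToken dicts
# in tokenizer_config.json, and reloading must convert them back before they
# are used. Checkpoint name and output path below are assumptions.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small", use_fast=False)
tok.save_pretrained("./tok-roundtrip")  # special tokens written out as dicts
reloaded = AutoTokenizer.from_pretrained("./tok-roundtrip")

# After the fix, the special tokens survive the roundtrip intact.
assert reloaded.eos_token == tok.eos_token
assert reloaded.all_special_tokens == tok.all_special_tokens
```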