add_tokens does not preserve spacing #28384
A few things can come into play. By default the token is normalized, which means the normalizer adds a prefix space to it; when decoding, that space is removed. You should add the token as an `AddedToken` with `normalized=False`.
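A minimal sketch of that suggestion (the model name is taken from the session later in this thread):

```python
from transformers import AutoTokenizer, AddedToken

tok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')
# normalized=False prevents the normalizer from prepending a space to the token
tok.add_tokens([AddedToken('Deniz', normalized=False)])
```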
This time there is an extra space that appears after the token:
Here are the individual tokens, if it helps:
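For reference, a minimal way to reproduce and inspect those tokens (a sketch assuming the fast Mistral tokenizer from this thread; the exact token list was in the original comment):

```python
from transformers import AutoTokenizer, AddedToken

tok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')  # fast tokenizer by default
tok.add_tokens([AddedToken('Deniz', normalized=False)])
ids = tok.encode('Arthur met Deniz today.')
print(tok.convert_ids_to_tokens(ids))            # the individual tokens
print(tok.decode(ids, skip_special_tokens=True))  # extra space reported after the added token
```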
That is also expected; you should use a slow tokenizer (`use_fast=False`) together with `legacy=False` when loading it.
I tried adding `' Deniz'` and `'▁Deniz'`, thinking they would better match the behavior of regular tokens like `'▁Arthur'`, but with no success. So far I haven't found a combination of options that preserves spacing. Will check out your fix next.
This is the combination that works:

```python
In [43]: from transformers import AutoTokenizer, AddedToken

In [44]: mtok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1', use_fast=False, legacy=False)

In [45]: mtok.tokenize('Arthur met Deniz today.')
Out[45]: ['▁Arthur', '▁met', '▁Den', 'iz', '▁today', '.']

In [46]: mtok.add_tokens([AddedToken('Deniz', normalized=False, special=False)])
Out[46]: 1

In [47]: mtok.tokenize('Arthur met Deniz today.')
Out[47]: ['▁Arthur', '▁met', '▁', 'Deniz', '▁today', '.']
```
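As a quick follow-up check (not part of the original thread), the same `mtok` session should also round-trip the spacing on decode:

```python
ids = mtok.encode('Arthur met Deniz today.')
print(mtok.decode(ids, skip_special_tokens=True))
# expected to print 'Arthur met Deniz today.' with the single space before 'Deniz' preserved
```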
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
System Info
`transformers` version: 4.35.2

Who can help?
@ArthurZucker and @younesbelkada
Information
Tasks
An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
Expected behavior
Spaces should be preserved when text is encoded and then decoded.