System Info

transformers version: 4.21.0

Who can help?

@LysandreJik

Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)
Reproduction
To reproduce this error, create a tokenizer with the tokenizers library and wrap it in PreTrainedTokenizerFast:
from tokenizers import Tokenizer, models, pre_tokenizers, trainers
data = [
"My first sentence",
"My second sentence",
"My third sentence is a bit longer",
"My fourth sentence is longer than the third one"
]
# Train a small word-level vocabulary
tokenizer = Tokenizer(models.WordLevel(unk_token="<unk>"))
trainer = trainers.WordLevelTrainer(vocab_size=10, special_tokens=["<unk>", "<pad>"])
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
tokenizer.train_from_iterator(data, trainer=trainer)
# Configure padding and truncation directly on the tokenizers.Tokenizer
tokenizer.enable_padding(pad_token="<pad>", pad_id=tokenizer.token_to_id("<pad>"))
tokenizer.enable_truncation(max_length=5)
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
This prints an encoding of length 5 together with an explicit padding configuration.
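Concretely, both settings are now in effect on the bare tokenizer, which can be verified with a quick sanity check (not part of the original report):

assert len(tokenizer.encode(data[-1]).ids) == 5  # truncation to max_length=5 applied
assert tokenizer.padding is not None             # padding configuration is set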
On the other hand, if we wrap our tokenizer in the PreTrainedTokenizerFast class, call it once, and then print the same thing as before:
from transformers import PreTrainedTokenizerFast

fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
fast_tokenizer(data)  # a single call through the wrapper
# Inspect the original tokenizer again
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
This now prints an encoding longer than 5 ids and None for the padding: calling the wrapper has reset the padding and truncation settings on the underlying tokenizer.
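As a stop-gap (a hedged sketch, not the merged fix), padding and truncation can instead be driven through the wrapper's own call arguments; this requires declaring the pad token to the wrapper explicitly:

from transformers import PreTrainedTokenizerFast

# Sketch: tell the wrapper about the special tokens so it can pad by itself
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token="<unk>",
    pad_token="<pad>",
)
batch = fast_tokenizer(data, padding="max_length", truncation=True, max_length=5)
print(batch["input_ids"])  # each row is exactly 5 ids long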
Expected behavior
The tokenizer should behave the same after being wrapped in PreTrainedTokenizerFast as it did before: wrapping should not affect its padding and truncation settings.
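In other words, with the fix in place, a check along these lines should pass (a minimal sketch of the expected invariant, not a test from the repository):

fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
fast_tokenizer(data)
assert tokenizer.padding is not None             # padding survives the wrapper call
assert len(tokenizer.encode(data[-1]).ids) == 5  # truncation survives too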
I see that you have already proposed a fix that has been merged and that solves the problem you are pointing out. If you are happy with it, is it ok if we close this issue?