
Commit

improve add_tokens docstring (huggingface#18687)
* improve add_tokens documentation

* format
SaulLu authored and oneraghavan committed Sep 26, 2022
1 parent f0d428f commit 681e0fd
Showing 1 changed file with 5 additions and 3 deletions.
8 changes: 5 additions & 3 deletions src/transformers/tokenization_utils_base.py
@@ -915,10 +915,12 @@ def add_tokens(
     ) -> int:
         """
         Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to
-        it with indices starting from length of the current vocabulary.
+        it with indices starting from the length of the current vocabulary and will be isolated before the tokenization
+        algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore
+        not treated in the same way.

-        Note,None When adding new tokens to the vocabulary, you should make sure to also resize the token embedding
-        matrix of the model so that its embedding matrix matches the tokenizer.
+        Note, when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix
+        of the model so that its embedding matrix matches the tokenizer.

         In order to do that, please use the [`~PreTrainedModel.resize_token_embeddings`] method.
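The two behaviours the updated docstring describes — added tokens get indices continuing from the current vocabulary size, and they are isolated before the base tokenization algorithm runs — can be sketched with a toy tokenizer. This is a simplified illustration, not the real `transformers` implementation; `ToyTokenizer` and its whitespace "base algorithm" are inventions for demonstration only.

```python
import re

class ToyTokenizer:
    """Toy sketch (not the transformers implementation) of add_tokens semantics."""

    def __init__(self, vocab):
        self.vocab = dict(vocab)   # base vocabulary: token -> id
        self.added_tokens = {}     # added tokens live in a separate map

    def add_tokens(self, new_tokens):
        added = 0
        for tok in new_tokens:
            if tok not in self.vocab and tok not in self.added_tokens:
                # new indices continue from the current total vocabulary size
                self.added_tokens[tok] = len(self.vocab) + len(self.added_tokens)
                added += 1
        return added

    def tokenize(self, text):
        if not self.added_tokens:
            return text.split()
        # isolate added tokens first, so the base algorithm never sees them
        pattern = "(" + "|".join(re.escape(t) for t in self.added_tokens) + ")"
        pieces = []
        for segment in re.split(pattern, text):
            if segment in self.added_tokens:
                pieces.append(segment)      # added token passes through intact
            elif segment:
                pieces.extend(segment.split())  # base "algorithm": whitespace split
        return pieces

tok = ToyTokenizer({"hello": 0, "world": 1})
tok.add_tokens(["<special>"])
print(tok.added_tokens["<special>"])        # 2, i.e. len of the base vocab
print(tok.tokenize("hello<special>world"))  # ['hello', '<special>', 'world']
```

In the real library, the companion step the docstring warns about would look roughly like `model.resize_token_embeddings(len(tokenizer))` after `tokenizer.add_tokens(...)`, so the embedding matrix grows to cover the new indices.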

