Hybrid tokenizers #25
Conversation
LGTM
model2vec/distill/distillation.py (Outdated)
:param device: The device to use.
:param pca_dims: The number of components to use for PCA. If this is None, we don't apply PCA.
:param apply_zipf: Whether to apply Zipf weighting to the embeddings.
:param use_subword: Whether to keep subword tokens in the vocabulary. If this is False, you must pass a vocabulary.
This also changes the actual tokenizer from subword to word level, right? I would also specify that in the description.
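For context, a minimal sketch of how these parameters might be used. The parameter names come from the docstring above; the `distill` import path and the teacher model name are assumptions for illustration, not taken from this diff.

```python
from model2vec.distill import distill

# Subword mode (default): keep the teacher's subword tokenizer.
model = distill(
    model_name="BAAI/bge-base-en-v1.5",  # illustrative teacher model
    pca_dims=256,      # set to None to skip PCA entirely
    apply_zipf=True,   # down-weight frequent tokens
    use_subword=True,
)

# Word-level mode: per the docstring, a vocabulary is required
# when use_subword=False, and the tokenizer becomes word level.
vocab = ["hello", "world", "tokenizer"]  # illustrative vocabulary
word_model = distill(
    model_name="BAAI/bge-base-en-v1.5",
    vocabulary=vocab,
    use_subword=False,
)
```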
model2vec/distill/distillation.py (Outdated)
:param use_subword: Whether to keep subword tokens in the vocabulary. If this is False, you must pass a vocabulary.
:raises: ValueError if the PCA dimension is larger than the number of dimensions in the embeddings.
:raises: ValueError if the vocabulary contains duplicate tokens.
:return: A StaticModdel
Suggested change:
-:return: A StaticModdel
+:return: A StaticModel.
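The docstring above also names two failure modes. A hedged sketch of how they might surface, using the same assumed `distill` signature as the earlier example:

```python
from model2vec.distill import distill

# Assumed failure mode from the :raises: docs: a pca_dims value larger
# than the embedding dimensionality should raise a ValueError.
try:
    distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=100_000)
except ValueError as e:
    print(f"PCA check: {e}")

# Assumed failure mode: duplicate tokens in a user-supplied vocabulary.
try:
    distill(
        model_name="BAAI/bge-base-en-v1.5",
        vocabulary=["token", "token"],  # duplicate on purpose
        use_subword=False,
    )
except ValueError as e:
    print(f"Vocabulary check: {e}")
```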