[WIP] HuggingFaceModelTokenizer #2723
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2723
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks @krammnic for taking this one on! This will be huge for lowering the barrier to onboard new models. Let's definitely make sure to add unit tests for this one. (You can likely create some dummy tokenizer_config.json files and check them directly into the repo, since they should be pretty small.)
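A minimal sketch of what such a test could look like (the fixture paths, import path, and constructor arguments below are assumptions based on this diff, not the final API):

```python
from torchtune.data import Message
from torchtune.modules.transforms.tokenizers import HuggingFaceModelTokenizer

# Hypothetical dummy fixtures checked into the repo alongside the other test assets
ASSETS = "tests/assets"


def test_hf_model_tokenizer_tokenize_messages():
    tokenizer = HuggingFaceModelTokenizer(
        tokenizer_json_path=f"{ASSETS}/tiny_tokenizer.json",
        tokenizer_config_json_path=f"{ASSETS}/tiny_tokenizer_config.json",
    )
    messages = [
        Message(role="user", content="hello world", masked=True),
        Message(role="assistant", content="hi there"),
    ]
    tokens, mask = tokenizer.tokenize_messages(messages)
    # Exact expected ids would be derived once from the dummy fixture files
    assert len(tokens) == len(mask)
    assert all(isinstance(t, int) for t in tokens)
```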
special_tokens_mapping = {}
for token in self.special_tokens:
    special_tokens_mapping[token] = self.base_tokenizer.encode(token)
rendered_template = self.template.render(
Wow this actually wound up being quite easy lol
Unfortunately, tool calling will still be quite tricky
@krammnic Other than the lack of tool calls in the tt Message class, are there any other reasons why tool calling will be tricky?
Probably not.
if content := token_info.get("content"):
    special_tokens.add(content)

# We sort lexicographically in order to get real tokens after all <|dummy_x|>
Sorry I don't fully understand this comment. I assume this is referring to reserved special tokens? If so, why is string sort the thing to use here?
Probably we can drop it; it might just simplify debugging in case we face some problems with new configs.
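For context, a toy illustration of what the lexicographic sort actually does with a mix of reserved placeholders and real special tokens (token names are made up):

```python
special_tokens = {
    "<|dummy_0|>",
    "<|dummy_1|>",
    "<|begin_of_text|>",
    "<|eot_id|>",
}
print(sorted(special_tokens))
# ['<|begin_of_text|>', '<|dummy_0|>', '<|dummy_1|>', '<|eot_id|>']
# Plain string sort interleaves real tokens with the <|dummy_x|> placeholders
# rather than pushing them all to one side, so dropping it seems reasonable.
```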
self.base_tokenizer = HuggingFaceBaseTokenizer(
    tokenizer_json_path=tokenizer_json_path,
    tokenizer_config_json_path=tokenizer_config_json_path,
    generation_config_path=generation_config_path,
)
I know @joecummings had some thoughts on whether we should use a generic base_tokenizer instead of constraining to HuggingFaceBaseTokenizer. I suspect the latter is better for making sure everything works together, but I know at least Qwen2Tokenizer still relies on the merges + vocab files instead of the tokenizer.json file (I alluded to this at the very bottom of #2706). So we should figure out whether this will work for that case.
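For reference, the more generic option would look roughly like this (names and import paths are illustrative, not what the PR currently does):

```python
from torchtune.modules.transforms.tokenizers import BaseTokenizer, ModelTokenizer


class HuggingFaceModelTokenizer(ModelTokenizer):
    def __init__(
        self,
        base_tokenizer: BaseTokenizer,  # e.g. HuggingFaceBaseTokenizer, or Qwen2's merges/vocab-based tokenizer
        tokenizer_config_json_path: str,  # still needed for the chat template and special tokens
    ):
        self.base_tokenizer = base_tokenizer
        self.tokenizer_config_json_path = tokenizer_config_json_path
```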
{"role": m.role, "content": m.content[0]["content"]} for m in messages | ||
], | ||
add_generation_prompt=add_eos, | ||
**special_tokens_mapping, # We assume that the naming is consitent |
Yeah I think this should be a reasonable assumption (as long as we are also getting the special_tokens from the same place as the template)
Thanks for your patience! Left a handful of comments. Personally I would just add a unit test now, it'll make it easier to reason about things and help validate that this is giving the expected results.
self.truncation_type = truncation_type

def _raise_helper(self, msg):
    raise Exception(msg)
Any reason to use this instead of the more specific Jinja Template Error used by HF here?
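For reference, HF's chat-template machinery raises Jinja's own exception type here, so the equivalent would be roughly:

```python
from jinja2.exceptions import TemplateError


def _raise_helper(self, msg):
    # Presumably exposed to the template as the `raise_exception` callable,
    # matching what HF does for {{ raise_exception(...) }} in chat templates.
    raise TemplateError(msg)
```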
def _raise_helper(self, msg):
    raise Exception(msg)

def _get_token_from_config(self, config: Dict[str, Any], key: str) -> str:
I'm confused by this method. Based on the docstring and implementation it seems like it is being used to get special tokens. But then you are only using it for chat_template, which iiuc is always a string.
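For context on what the helper seems intended for: in HF tokenizer_config.json files a field like bos_token can appear either as a bare string or as a dict with a content key, whereas chat_template is always a plain string (the values below are illustrative):

```python
config_a = {"bos_token": "<s>", "chat_template": "{{ bos_token }}..."}
config_b = {"bos_token": {"content": "<s>", "lstrip": False, "rstrip": False}}


def _get_token_from_config(config, key):
    """A guess at what the helper is meant to do: normalize both shapes to a string."""
    token = config.get(key)
    if isinstance(token, dict):
        return token["content"]
    return token


assert _get_token_from_config(config_a, "bos_token") == "<s>"
assert _get_token_from_config(config_b, "bos_token") == "<s>"
# chat_template is already a plain string, hence the confusion about using it only there.
```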
    messages=current_messages,
    add_generation_prompt=add_eos if i == len(messages) - 1 else False,
    **special_tokens_mapping,  # We assume that the naming is consistent
    **self.top_level_variables,
Noob q: are top-level variables always sufficient? (I.e. is there ever a case where HF templates based on some nested field?)
Yes. In some configs the special token might be bos_id, for instance, but the bos is used in the chat_template, where it is redefined as a top-level variable. Generally, we know there is a possibility to pass some extra variables, so it is better to prevent errors here with such a trick.
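A toy example of the kind of template this guards against (the template string and token value are made up):

```python
from jinja2 import Environment

# Some HF chat templates reference top-level variables like bos_token directly.
template = Environment().from_string(
    "{{ bos_token }}{% for m in messages %}{{ m['role'] }}: {{ m['content'] }}\n{% endfor %}"
)
print(
    template.render(
        messages=[{"role": "user", "content": "hi"}],
        bos_token="<|begin_of_text|>",  # comes from tokenizer_config.json, not from the messages
    )
)
```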
rendered = self.template.render(
    messages=current_messages,
    add_generation_prompt=add_eos if i == len(messages) - 1 else False,
What are the implications of this? E.g. during finetuning do we actually want to add a generation prompt at the end of all the messages? (I would assume no)
Good point!
No, bad point. We should keep this argument; otherwise we will not be able to render.
Ah I think you're right, it is a bad point. I think I may have misread this line... my point was that we should only add the generation prompt during inference, but I see that add_eos is basically a proxy for inference.
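To make that concrete, this is all add_generation_prompt typically controls in an HF-style template (toy template, not from any real config):

```python
from jinja2 import Environment

tmpl = Environment().from_string(
    "{% for m in messages %}<|{{ m['role'] }}|>{{ m['content'] }}<|end|>{% endfor %}"
    "{% if add_generation_prompt %}<|assistant|>{% endif %}"
)
msgs = [{"role": "user", "content": "hi"}]
print(tmpl.render(messages=msgs, add_generation_prompt=False))  # finetuning on full conversations
print(tmpl.render(messages=msgs, add_generation_prompt=True))   # inference: cue the next assistant turn
```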
# This part is extremely hacky, but we need to handle case where we have variable access with jinja
special_tokens_mapping = {}
for token in self.special_tokens:
    special_tokens_mapping[token] = self.base_tokenizer.encode(token)
I don't think I fully understand this comment. I also don't understand why we need to rebuild the special tokens mapping on every invocation of tokenize_messages. (Is that what the comment is referring to?)
This comment became legacy during the changes, good catch.
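If the per-call rebuild is also unnecessary, one option is to build the mapping once in __init__ and reuse it (a sketch of method fragments, using the attribute names from this diff):

```python
# In __init__, after self.special_tokens is populated:
self.special_tokens_mapping = {
    token: self.base_tokenizer.encode(token) for token in self.special_tokens
}

# Then tokenize_messages just unpacks the cached dict into render(...):
rendered = self.template.render(
    messages=current_messages,
    add_generation_prompt=add_eos if i == len(messages) - 1 else False,
    **self.special_tokens_mapping,
    **self.top_level_variables,
)
```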
if message.masked:
    tokenized_messages.extend([True] * len(delta))
else:
    tokenized_messages.extend(delta)

mask.extend([message.masked] * len(delta))
This doesn't seem right to me
Oops
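Presumably the intended bookkeeping is just (sketch; delta is the newly tokenized span for the current message, as in this diff):

```python
# Token ids always go into tokenized_messages; the mask gets one bool per token.
tokenized_messages.extend(delta)
mask.extend([message.masked] * len(delta))
```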
tokenized_messages = truncate(
    tokens=tokenized_messages,
    max_seq_len=max_seq_len,
    eos_id=None,
Should we also be adding eos here?
We should
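i.e. something along these lines (a sketch mirroring how other torchtune tokenizers truncate, and assuming base_tokenizer exposes eos_id):

```python
if max_seq_len is not None:
    tokenized_messages = truncate(
        tokens=tokenized_messages,
        max_seq_len=max_seq_len,
        eos_id=self.base_tokenizer.eos_id if add_eos else None,
        truncation_type=self.truncation_type,
    )
    mask = truncate(
        tokens=mask,
        max_seq_len=max_seq_len,
        eos_id=True if add_eos else None,
        truncation_type=self.truncation_type,
    )
```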
current_messages = [
    {"role": m.role, "content": m.content[0]["content"]}
    for m in messages[: i + 1]
]
This seems a bit strange to me.. in my mind we should either be able to (a) render/tokenize all messages in one shot, or (b) loop over messages one at a time, render, tokenize, and concat. Why do we need to do this "cumulative tokenization"? (Is it because of the difficulties you mentioned with masking? If so I wonder whether there is an alternative)
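To spell out the two options (toy stand-ins for the real render/encode; not the PR's code):

```python
messages = [
    {"role": "user", "content": "hi", "masked": True},
    {"role": "assistant", "content": "hello", "masked": False},
]


def render(msgs):  # stand-in for self.template.render(...)
    return "".join(f"<|{m['role']}|>{m['content']}" for m in msgs)


def encode(text):  # stand-in for self.base_tokenizer.encode(...)
    return list(text.encode("utf-8"))


# (a) One-shot: render and tokenize the whole conversation once.
#     Simple, but per-message boundaries (needed for the loss mask) are lost.
one_shot_ids = encode(render(messages))

# (b) Cumulative (what the PR does): re-render messages[: i + 1] and treat the new
#     suffix as the tokens "owned" by message i, so each token inherits that message's mask.
ids, mask, prev_len = [], [], 0
for i, m in enumerate(messages):
    cur = encode(render(messages[: i + 1]))
    delta = cur[prev_len:]
    ids.extend(delta)
    mask.extend([m["masked"]] * len(delta))
    prev_len = len(cur)

assert ids == one_shot_ids  # only holds when rendering is prefix-stable
```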
Let me push a unit test and we will iterate on this one more time.
@ebsmothers Let's iterate.
Context
What is the purpose of this PR? Is it to
Please link to any issues this PR addresses.
Changelog
What are the changes made in this PR?
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
run unit tests via pytest tests
run recipe tests via pytest tests -m integration_test
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.
Basically, this is a first pass (I'm still thinking about how to add masking), but the Jinja render works surprisingly well.