update baichuan chat template #2185
Conversation
fastchat/conversation.py
Outdated
```diff
     sep="",
-    sep2="</s>",
+    sep2="",
     stop_token_ids=[2, 195],
```
`sep2="",`
`sep2` should not be deleted, because it is used in vLLM:
https://github.com/vllm-project/vllm/blob/79af7e96a0e2fc9f340d1939192122c3ae38ff17/vllm/entrypoints/openai/api_server.py#L92
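For context, a minimal self-contained sketch of why `sep2` matters downstream. The `Conv` class below is a hypothetical stand-in for fastchat's `Conversation` (not the real class), illustrating the assumption that `sep2` both terminates assistant turns and is what code like the linked vLLM entry point reads as a stop string:

```python
from dataclasses import dataclass, field

@dataclass
class Conv:
    """Toy stand-in for fastchat's Conversation template."""
    roles: tuple = ("user", "assistant")
    sep: str = ""          # appended after user turns
    sep2: str = "</s>"     # appended after assistant turns; also read as a stop string
    messages: list = field(default_factory=list)

    def get_prompt(self) -> str:
        out = []
        for role, text in self.messages:
            # user turns end with sep, assistant turns end with sep2
            out.append(text + (self.sep if role == self.roles[0] else self.sep2))
        return "".join(out)

conv = Conv()
conv.messages = [("user", "Hello"), ("assistant", "Hi!")]
prompt = conv.get_prompt()
# Downstream code that does `stop = conv.sep2` breaks if sep2 becomes "".
```

If `sep2` is emptied, the prompt still renders, but a consumer that uses it as the generation stop string gets an empty stop and may never terminate.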
```diff
-    stop_token_ids=[2, 195],
+    stop_token_ids=[],
```
Could you clarify why the stop token IDs are being deleted? Are they no longer required?
Did you test the CLI and check whether the model can stop successfully?
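As background for the question above, a toy sketch of how `stop_token_ids` typically gate decoding. The ids 2 and 195 come from the diff; `step_fn` is a hypothetical stand-in for a model's next-token step, not any real fastchat or vLLM API:

```python
def generate(step_fn, stop_token_ids, max_new_tokens=64):
    """Greedy decode loop: stop as soon as a stop token id is produced."""
    out = []
    for _ in range(max_new_tokens):
        tok = step_fn(out)
        if tok in stop_token_ids:  # e.g. 2 (</s>) or 195 (next-turn marker)
            break
        out.append(tok)
    return out

# Toy "model" that emits 7, 8, then token id 195.
tokens = generate(lambda ctx: [7, 8, 195, 2][len(ctx)],
                  stop_token_ids=[2, 195])
# → [7, 8]
```

With `stop_token_ids=[]`, the same loop would keep appending until `max_new_tokens`, which is why testing that the model still stops is worth checking.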
I tested the submitted code in vLLM; everything works and the inference results are as expected. vLLM uses fschat.
Why are these changes needed?
Update baichuan-13b-chat tokenizer rules
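The field changes visible in the diff hunks above, summarized as plain dicts (hypothetical variable names, for illustration only):

```python
# Before/after values of the baichuan template fields touched by this PR,
# copied from the diff; only sep2 and stop_token_ids change.
before = {"sep": "", "sep2": "</s>", "stop_token_ids": [2, 195]}
after  = {"sep": "", "sep2": "",     "stop_token_ids": []}

changed = {k for k in before if before[k] != after[k]}
# → {'sep2', 'stop_token_ids'}
```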
Related issue number (if applicable)
#2175
Checks
I've run `format.sh` to lint the changes in this PR.