Thanks for your great work! I ran into some problems when using fastchat/train/train.py to fine-tune llama-2-7b with llama-2's conversation template.
I changed get_conversation_template("vicuna") to get_conversation_template("llama-2") and deleted the assert conv.sep_style == SeparatorStyle.ADD_COLON_TWO. However, a tokenization mismatch warning was reported, and the training loss was always 0.
WARNING: tokenization mismatch: 78 vs. 80.
#turn = 1. (ignored)
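To make the change concrete, this is roughly what the modified lines look like (a minimal sketch only; the surrounding preprocessing code in fastchat/train/train.py is omitted, and the import paths below reflect my FastChat version and may differ in yours):

```python
from fastchat.conversation import SeparatorStyle
from fastchat.model.model_adapter import get_conversation_template

# Original code in the preprocessing step of train.py:
#   conv = get_conversation_template("vicuna")
#   assert conv.sep_style == SeparatorStyle.ADD_COLON_TWO

# My change: use the llama-2 template and drop the separator-style assert,
# since the llama-2 template does not use ADD_COLON_TWO separators.
conv = get_conversation_template("llama-2")
```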
Could you please tell me how to adapt the code to llama-2's conversation template? Thanks a lot!