Fix prompt assembly for llama #952

Merged: 9 commits into main, Dec 14, 2023

Conversation

hamelsmu (Collaborator) commented Dec 14, 2023

UPDATE: after fixing the omission of the human message, I found many more issues with the llama templating:

  • EOS/BOS tokens applied incorrectly in between turns
  • The system message was incorrectly wrapped in an [INST] where it wasn't supposed to be
  • Multi-turn conversations were not handled properly
  • etc.

I added lots of tests for the following cases:

  1. Multi-turn
  2. Single turn
  3. Multi-turn without a system message
  4. Single turn without a system message

For the tests, I included a redundant decoded version of the string so you can see the assembled prompt, which should help with code review and understanding (the multi-turn case is sketched below).
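
To make the test shape concrete, here is a minimal sketch of the multi-turn case. The tokenize/decode helpers and the expected string come from the test hunk quoted further down in this thread; the exact layout of the conversation fixture (sharegpt-style from/value entries) is an assumption for illustration.

# Minimal sketch of the multi-turn test. The fixture layout below is assumed
# for illustration; `tokenize`/`decode` stand in for the PR's test helpers,
# and the expected string is the Llama-2 chat format the PR asserts.
multi_turn_conv = {
    "conversations": [
        {"from": "system", "value": "lorem"},
        {"from": "human", "value": "abc"},
        {"from": "gpt", "value": "ipsum"},
        {"from": "human", "value": "123"},
        {"from": "gpt", "value": "sit"},
    ]
}
mt_ids = tokenize(multi_turn_conv)
# The redundant decoded assertion makes the assembled prompt visible in review:
assert decode(mt_ids) == (
    '<s> [INST] <<SYS>>\nlorem\n<</SYS>>\n\n'
    'abc [/INST] ipsum</s>'
    '<s> [INST] 123 [/INST] sit</s>'
)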

hamelsmu added the "bug: Something isn't working" label on Dec 14, 2023
hamelsmu (Collaborator, Author)

@winglian ok this is ready for your review

hamelsmu changed the title from "start at index 0 for fastchat prompt assembly for llama" to "Fix fastchat prompt assembly for llama" on Dec 14, 2023
hamelsmu changed the title from "Fix fastchat prompt assembly for llama" to "Fix prompt assembly for llama" on Dec 14, 2023
hamelsmu (Collaborator, Author)

@winglian I fixed quite a few issues that I found. I added a whole bunch of tests.

# Prepend the BOS token ("<s>") at the start of each user turn: with a
# system message present, user turns land on odd indices; without one,
# they land on even indices.
if (i % 2 == 0 and not self.system_message) or (
    i % 2 != 0 and self.system_message
):
    role = "<s> " + role
hamelsmu (Collaborator, Author)

The only thing I don't love is hardcoding the BOS token here, but I couldn't see a better way to fix this at the moment.
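
For reference, one hedged alternative to the hardcoded literal, assuming a Hugging Face-style tokenizer were reachable from this code path (it may not be where this prompt is assembled, which is presumably why the PR hardcodes it):

# Sketch only: read BOS from the tokenizer instead of hardcoding "<s>".
# Assumes a `tokenizer` with a `bos_token` attribute (e.g. a Hugging Face
# PreTrainedTokenizer) is in scope here, which the real code path may not have.
bos = tokenizer.bos_token or "<s>"
if (i % 2 == 0 and not self.system_message) or (
    i % 2 != 0 and self.system_message
):
    role = bos + " " + role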


jph00 commented Dec 14, 2023

@hamelsmu you're a legend and a hero for doing this

hamelsmu merged commit 5ada140 into main on Dec 14, 2023
4 checks passed
# fmt: off
mt_ids = tokenize(multi_turn_conv)
assert decode(mt_ids) == '<s> [INST] <<SYS>>\nlorem\n<</SYS>>\n\nabc [/INST] ipsum</s><s> [INST] 123 [/INST] sit</s>'
tokestermw (Contributor) commented Dec 14, 2023

at least for mistral instruct, it doesn't seem to add a BOS_ID for every turn

[BOS_ID] + 
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]

https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
and
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2

i can try adding some tests separately
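
For concreteness, a short sketch of the assembly those model cards describe: one BOS up front, EOS after every assistant turn, and no BOS between turns. The helper name and signature are illustrative, not axolotl's or fastchat's API.

# Illustrative sketch of the Mistral Instruct assembly quoted above; the
# function is hypothetical, not an existing API.
def build_mistral_instruct_ids(turns, tokenizer):
    """turns: list of (user_message, bot_message) pairs."""
    ids = [tokenizer.bos_token_id]  # single BOS at the very start
    for user_msg, bot_msg in turns:
        ids += tokenizer.encode(f"[INST] {user_msg} [/INST]",
                                add_special_tokens=False)
        ids += tokenizer.encode(bot_msg, add_special_tokens=False)
        ids.append(tokenizer.eos_token_id)  # EOS after each assistant turn
    return ids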

hamelsmu (Collaborator, Author) replied Dec 14, 2023

Mistral Instruct also doesn't have BOS tokens in between turns. It's probably best to separate the llama and mistral prompt styles?

hamelsmu (Collaborator, Author)

@tokestermw sorry, what I meant to say is that the official Mistral Instruct template actually does NOT have BOS in between turns! You can see it if you read the link you sent carefully (I missed this too at first).

tokestermw (Contributor)

> does NOT have BOS in between turns

yes, that's what i meant as well :) agree on separating llama and mistral, but maybe can do on fastchat repo (or both)

hamelsmu (Collaborator, Author)

Good idea
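
One hypothetical way the split could look (illustrative only, not the implementation that eventually landed): make the between-turn BOS a property of the prompt style, since that is the one behavior the two templates disagree on here.

from dataclasses import dataclass

# Hypothetical sketch of separating the two prompt styles; names are
# illustrative, not the code that landed in either repo.
@dataclass
class PromptStyle:
    name: str
    bos_between_turns: bool

LLAMA2_CHAT = PromptStyle("llama-2", bos_between_turns=True)
MISTRAL_INSTRUCT = PromptStyle("mistral-instruct", bos_between_turns=False)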

casper-hansen (Collaborator)

Oh my, this is great @hamelsmu. Most of us use ChatML for conversation by now (OpenOrca, OpenHermes, Dolphin), so it would be amazing to have more testing for other formats as well. Prompt formatting is something I have seen issues with before, so it's great to see it being handled properly now!

winglian deleted the fix-human branch on January 23, 2024