Exclude set of tokens from rep penalty calculation #1170
Comments
The repetition penalty is a one-off thing, i.e. it doesn't matter how many times a token is present, the penalty will be the same. I suppose you CAN get unbounded generation, but I'm not sure if it DOES happen. The probabilities associated with these two tokens when generating are so high that the penalty/sampling should not matter. Do you have an experiment that could verify that this is indeed happening?
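For reference, a rough sketch of how an HF-style repetition penalty rule behaves (the helper name is made up for illustration); the one-off property comes from penalizing each distinct id once, no matter how often it occurs:

```python
def apply_repetition_penalty(scores, context_ids, penalty):
    # Hypothetical helper: `scores` is indexed by token id, `context_ids`
    # is the full context. Each id seen anywhere in the context is
    # penalized exactly once, regardless of how many times it appears.
    for t in set(context_ids):
        scores[t] = scores[t] / penalty if scores[t] > 0 else scores[t] * penalty
    return scores
```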
I haven't noticed it in my own inference with teknium/OpenHermes-2-Mistral-7B, despite it using EOS (<|im_end|>) as the turn delimiter, but according to this HF employee, LMSys has something you could theoretically test it on. I wasn't aware of the one-off behaviour when I started these feature requests, so if the token makes it through two turns it should never become a problem. Does the rep penalty fall off the further away the token was last seen? I speculated that this may only show up with short turns, if it shows up at all, so in roleplay or very terse answering models it could be a problem if recency were a big factor.
No, it's constant. It could indeed be interesting to add a windowing on it.
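A windowed variant could look roughly like this (a hypothetical sketch, not existing TGI code): only tokens seen in the last `window` positions are penalized, so a turn delimiter from many turns ago stops being suppressed.

```python
def apply_windowed_repetition_penalty(scores, context_ids, penalty, window=256):
    # Hypothetical sketch: restrict the penalty to the most recent `window` tokens.
    for t in set(context_ids[-window:]):
        scores[t] = scores[t] / penalty if scores[t] > 0 else scores[t] * penalty
    return scores
```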
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Feature request
Hello, I would like to propose a feature that allows you to set a list of tokens, or even token sequences, that can be excluded from repetition penalty calculations.
The reasoning for this is that in a multiturn prompt format, the same delimiter tokens appear in every single turn.
This gets even worse with a format like ChatML, which as is now standard uses <|im_end|> as the stopping token at the end of every turn (a sketch follows below). Given that these tokens appear in every turn, and especially with short turn sequences, the repetition penalty will eventually destroy the validity of these prompt formats.
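For illustration, a minimal ChatML-style conversation (the messages themselves are made up); note that <|im_start|> and <|im_end|> recur in every single turn, so a naive repetition penalty keeps pushing their probability down:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Hello! How can I help?<|im_end|>
<|im_start|>user
What is 2 + 2?<|im_end|>
<|im_start|>assistant
```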
Motivation
Your contribution
I will soon attempt a solution, though I suspect I will get it wrong. Nonetheless, here is my theory on how to do it in TGI:
This seems to be the rep penalty code:
I'm thinking we take the input_ids before they get scored and simply replace them with a copy that has any excluded ids removed, based on some variable holding a list of token ids (or token strings mapped to ids), as sketched below.
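A minimal sketch of that idea, assuming a processor shaped roughly like the repetition penalty logits processors in HF transformers / TGI; the class name, the `excluded_token_ids` parameter, and the exclusion logic are all hypothetical, not an existing TGI option:

```python
import torch


class ExclusionAwareRepetitionPenaltyLogitsProcessor:
    """Repetition penalty that skips a configurable set of token ids
    (e.g. <|im_end|>, EOS, role markers) when applying the penalty."""

    def __init__(self, penalty: float, excluded_token_ids: set[int]):
        self.penalty = penalty
        self.excluded = torch.tensor(sorted(excluded_token_ids), dtype=torch.long)

    def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
        # Keep only context tokens that are NOT in the exclusion list.
        keep_mask = ~torch.isin(input_ids, self.excluded.to(input_ids.device))
        # Penalize row by row, since each row may keep a different number of ids.
        for row, keep in enumerate(keep_mask):
            ids = input_ids[row][keep]
            penalized = torch.gather(scores[row], 0, ids)
            penalized = torch.where(
                penalized < 0, penalized * self.penalty, penalized / self.penalty
            )
            scores[row].scatter_(0, ids, penalized)
        return scores
```

Token strings could be mapped to ids up front, e.g. with the tokenizer's `convert_tokens_to_ids`, so the processor itself only ever deals with ids.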