Update logits_process.py docstrings (huggingface#25971)
larekrow authored and EduardoPach committed Nov 18, 2023
1 parent bd892aa commit fe3ac59
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/transformers/generation/logits_process.py
@@ -272,7 +272,7 @@ class RepetitionPenaltyLogitsProcessor(LogitsProcessor):
     [`LogitsProcessor`] that prevents the repetition of previous tokens through an exponential penalty. This technique
     shares some similarities with coverage mechanisms and other aimed at reducing repetition. During the text
     generation process, the probability distribution for the next token is determined using a formula that incorporates
-    token scores based on their occurrence in the generated sequence. Tokens with higher scores are less likely to be
+    token scores based on their occurrence in the generated sequence. Tokens with higher scores are more likely to be
     selected. The formula can be seen in the original [paper](https://arxiv.org/pdf/1909.05858.pdf). According to the
     paper a penalty of around 1.2 yields a good balance between truthful generation and lack of repetition.
@@ -328,7 +328,7 @@ class EncoderRepetitionPenaltyLogitsProcessor(LogitsProcessor):
         hallucination_penalty (`float`):
             The parameter for hallucination penalty. 1.0 means no penalty.
         encoder_input_ids (`torch.LongTensor`):
-            The encoder_input_ids that should not be repeated within the decoder ids.
+            The encoder_input_ids that should be repeated within the decoder ids.
     """

     def __init__(self, penalty: float, encoder_input_ids: torch.LongTensor):
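
For context on the first hunk: the formula from the cited paper (Keskar et al., 2019) rescales the logit of every token that already appears in the generated sequence, so repeated tokens become less likely to be picked. Below is a minimal sketch of how such an exponential penalty can be applied to a batch of logits; the helper name and toy values are illustrative, not the library's actual code.

```python
import torch

def apply_repetition_penalty(scores: torch.FloatTensor,
                             generated_ids: torch.LongTensor,
                             penalty: float = 1.2) -> torch.FloatTensor:
    """Lower the logits of tokens that already occur in the generated sequence.

    Positive logits are divided by `penalty` and negative logits are multiplied
    by it, so a previously generated token becomes less likely either way.
    """
    seen = torch.gather(scores, 1, generated_ids)                 # logits of already-generated tokens
    seen = torch.where(seen < 0, seen * penalty, seen / penalty)  # exponential penalty from the paper
    return scores.scatter(1, generated_ids, seen)                 # write the penalized logits back

# Toy usage: one sequence, vocabulary of 5, tokens 2 and 4 were generated already.
logits = torch.tensor([[1.0, -2.0, 3.0, 0.5, -1.0]])
penalized = apply_repetition_penalty(logits, torch.tensor([[2, 4]]), penalty=1.2)
```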
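
The second hunk covers the opposite mechanism: EncoderRepetitionPenaltyLogitsProcessor raises the scores of tokens present in the encoder input so the decoder is nudged to repeat the source, which is why the corrected docstring says the encoder_input_ids should be repeated. A rough sketch follows, under the assumption that the boost is applied as the reciprocal of hallucination_penalty; the function name and values are made up for illustration.

```python
import torch

def boost_encoder_tokens(scores: torch.FloatTensor,
                         encoder_input_ids: torch.LongTensor,
                         hallucination_penalty: float = 2.0) -> torch.FloatTensor:
    """Raise the logits of tokens that occur in the encoder input.

    Using the reciprocal of the penalty inverts the repetition-penalty rule,
    so (for a penalty above 1.0) source tokens become more likely choices.
    """
    boost = 1.0 / hallucination_penalty                          # assumed inversion of the penalty
    seen = torch.gather(scores, 1, encoder_input_ids)
    seen = torch.where(seen < 0, seen * boost, seen / boost)     # both branches push logits upward
    return scores.scatter(1, encoder_input_ids, seen)

# Toy usage: encourage tokens 0 and 3 (present in the encoder input) during decoding.
logits = torch.tensor([[0.2, 1.5, -0.7, -1.0, 0.9]])
boosted = boost_encoder_tokens(logits, torch.tensor([[0, 3]]), hallucination_penalty=2.0)
```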
