Commit 6fb8ab7
Increase output token limit for EquivalenceEvaluator (#6835)
EquivalenceEvaluator previously specified MaxOutputTokens = 1, since its prompt instructs the LLM to produce a response (a score) that is a single digit between 1 and 5.
It turns out that while one output token is enough for most models (including the OpenAI models that were used to test the prompt), some models require more than one token to emit the digit. For example, Claude appears to require two tokens for this (see #6814).
This PR bumps the MaxOutputTokens to 5 to address the above issue.
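The fix itself is a one-line change to the chat options the evaluator sends along with its prompt. A minimal sketch of the relevant configuration, assuming the `ChatOptions` type from Microsoft.Extensions.AI (the surrounding evaluator code is elided):

```csharp
using Microsoft.Extensions.AI;

// The evaluator's prompt asks the model for a single digit (1-5).
// A hard cap of MaxOutputTokens = 1 works when the tokenizer emits
// that digit as exactly one token (e.g. the OpenAI models the prompt
// was tested against), but truncates the response on tokenizers that
// need more than one token for it (e.g. Claude, see #6814).
var options = new ChatOptions
{
    MaxOutputTokens = 5, // was 1; 5 leaves headroom across tokenizers
};
```

Bumping the limit to 5 rather than exactly 2 is a small amount of headroom: the cost of a few extra tokens is negligible, and it avoids re-hitting the same truncation on any other tokenizer that splits the response differently.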
Fixes #6814
File tree: 1 file changed (+1, -1)
src/Libraries/Microsoft.Extensions.AI.Evaluation.Quality

Lines changed: 1 addition & 1 deletion (line 55 of the file replaced; the diff content itself was not captured).