feat: add bypass_n option to LangchainLLMWrapper for n-completion control #2354
Conversation
Thanks for the PR @SimFG
Could you please also add tests relevant to `bypass_n`.
Add unit tests that verify:
- When `bypass_n=True`, the wrapper doesn't pass `n` to the underlying LLM
- When `bypass_n=False` (default), behavior remains unchanged
- Both sync (`generate_text`) and async (`agenerate_text`) methods work correctly
Co-authored-by: Ani <5357586+anistark@users.noreply.github.com>
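A minimal sketch of tests along these lines, assuming the wrapper forwards keyword arguments (including `n`) to the wrapped Langchain LLM's `generate_prompt`/`agenerate_prompt` calls and that `bypass_n` is a constructor keyword; the mock setup and helper names (`make_fake_llm`, `make_fake_result`) are illustrative, not the actual tests added in this PR:

```python
# Sketch only: verifies that bypass_n controls whether n reaches the
# underlying Langchain LLM. Assumes the wrapper calls generate_prompt /
# agenerate_prompt on the wrapped LLM; requires pytest-asyncio for the
# async case.
from unittest.mock import AsyncMock, MagicMock

import pytest
from langchain_core.outputs import Generation, LLMResult
from langchain_core.prompt_values import StringPromptValue

from ragas.llms import LangchainLLMWrapper


def make_fake_result() -> LLMResult:
    # A single completion is enough for these call-forwarding checks.
    return LLMResult(generations=[[Generation(text="ok")]])


def make_fake_llm() -> MagicMock:
    fake = MagicMock()
    fake.generate_prompt = MagicMock(return_value=make_fake_result())
    fake.agenerate_prompt = AsyncMock(return_value=make_fake_result())
    return fake


def test_bypass_n_true_omits_n_sync():
    fake = make_fake_llm()
    wrapper = LangchainLLMWrapper(fake, bypass_n=True)
    wrapper.generate_text(StringPromptValue(text="hello"), n=3)
    _, kwargs = fake.generate_prompt.call_args
    assert "n" not in kwargs  # n must not reach the underlying LLM


def test_bypass_n_default_keeps_behavior_sync():
    fake = make_fake_llm()
    wrapper = LangchainLLMWrapper(fake)  # bypass_n defaults to False
    wrapper.generate_text(StringPromptValue(text="hello"), n=3)
    # Default behavior unchanged: the wrapper still decides as before
    # whether to forward n, and the call must still succeed.
    fake.generate_prompt.assert_called_once()


@pytest.mark.asyncio
async def test_bypass_n_true_omits_n_async():
    fake = make_fake_llm()
    wrapper = LangchainLLMWrapper(fake, bypass_n=True)
    await wrapper.agenerate_text(StringPromptValue(text="hello"), n=3)
    _, kwargs = fake.agenerate_prompt.call_args
    assert "n" not in kwargs
```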
Add comprehensive test cases to verify the behavior of the bypass_n parameter in LangchainLLMWrapper. Tests cover both sync and async methods, default behavior, and interaction with multiple completion support.
@anistark Thanks a lot for your time, I have added the relevant unit test cases.
I have updated the pull request, thank you for your careful review.
Thanks for the changes @SimFG 👏🏼
At the time, when using some OpenAI-compatible thinking (reasoning) models, certain errors would occur:
2025-10-10 11:22:45,625 - ragas.executor - ERROR - Exception raised in Job[3]: BadRequestError(Error code: 400 - {'error': {'code': 'InvalidParameter', 'message': 'Reasoning model does not support n > 1, logit_bias, logprobs, top_logprobs Request id: xxx', 'param': '', 'type': 'BadRequest'}})
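For context, a usage sketch of how the option would avoid that error, assuming `bypass_n` is exposed as a constructor keyword as the PR title suggests; the endpoint, model name, and API key below are placeholders:

```python
# Sketch: route an OpenAI-compatible reasoning model through ragas without
# ever sending n > 1. Endpoint, model name, and API key are placeholders.
from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper

reasoning_llm = ChatOpenAI(
    model="my-reasoning-model",          # placeholder model name
    base_url="https://example.com/v1",   # placeholder OpenAI-compatible endpoint
    api_key="sk-...",                    # placeholder key
)

# bypass_n=True keeps the wrapper from forwarding n to the underlying LLM,
# sidestepping the "Reasoning model does not support n > 1" 400 error above.
evaluator_llm = LangchainLLMWrapper(reasoning_llm, bypass_n=True)
```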