feat(llm): add automatic provider inference for LangChain LLMs #1460
Conversation
2 files reviewed, no comments
Add automatic provider name detection from LLM module paths to eliminate manual provider specification. The implementation extracts provider names from LangChain package naming conventions (e.g., `langchain_openai` → `openai`) and handles edge cases including community packages, wrapped classes, and multiple inheritance through MRO traversal.
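For illustration, here is a minimal sketch of the described approach; the helper names (`_provider_from_module`, `_infer_provider`) and the community-package fallback are assumptions for this sketch, not the PR's actual code:

```python
# Minimal sketch of module-path provider inference (illustrative only;
# names and fallback details are assumptions, not the PR's exact code).
from typing import Optional


def _provider_from_module(module: str) -> Optional[str]:
    """Map a LangChain module path to a provider name,
    e.g. "langchain_openai.chat_models" -> "openai"."""
    top_level = module.split(".")[0]
    if not top_level.startswith("langchain_"):
        return None
    provider = top_level[len("langchain_"):]
    if provider == "community":
        # Assumption: for langchain_community.* the provider comes from
        # the trailing submodule, e.g. "...chat_models.ollama" -> "ollama".
        tail = module.rsplit(".", 1)[-1]
        return tail if tail != top_level else None
    return provider


def _infer_provider(llm: object) -> Optional[str]:
    """Walk the class MRO so wrapped or subclassed LLMs still resolve
    to the underlying LangChain package."""
    for cls in type(llm).__mro__:
        provider = _provider_from_module(cls.__module__)
        if provider:
            return provider
    return None
```

Walking the MRO means a user-defined subclass of, say, `ChatOpenAI` still resolves to `openai`, because some ancestor class is defined inside the `langchain_openai` package.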
Force-pushed from c77ccf5 to 9ae8a0f.
Pull Request Overview
This PR adds automatic provider inference for LangChain LLMs by extracting provider names from module paths, eliminating the need for manual provider specification. The implementation follows LangChain's package naming conventions (e.g., langchain_openai → openai) and handles edge cases through MRO traversal.
Key Changes:
- Added `_infer_provider_from_module()` to extract provider names from LangChain package naming patterns
- Added public `get_llm_provider()` API for external provider name retrieval (see the usage sketch after this list)
- Updated `_setup_llm_call_info()` to automatically infer the provider when `model_provider` is `None`
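A hypothetical usage of the new public API follows; the import path matches the file touched in this PR (`nemoguardrails/actions/llm/utils.py`), but the exact call signature is an assumption:

```python
# Hypothetical usage; get_llm_provider's exact signature is an assumption.
from langchain_openai import ChatOpenAI

from nemoguardrails.actions.llm.utils import get_llm_provider

# A dummy API key keeps construction happy; provider inference makes no request.
llm = ChatOpenAI(model="gpt-4o-mini", api_key="dummy")
print(get_llm_provider(llm))  # expected: "openai"
```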
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| nemoguardrails/actions/llm/utils.py | Implements provider inference logic and integrates it into the LLM call setup flow |
| tests/test_actions_llm_utils.py | Comprehensive test suite covering standard providers, community packages, wrapped classes, and inheritance patterns |
trebedea left a comment:
Looks good!
Summary
- Added `_infer_provider_from_module()` to extract provider names from LangChain package conventions
- Added public `get_llm_provider()` API for retrieving provider names
- Updated `_setup_llm_call_info()` to automatically infer the provider when it is not explicitly provided

Test Plan
- Tests verify that the provider is inferred automatically when the `model_provider` parameter is `None`
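A sketch of what such tests might look like, in the spirit of the suite described above; the test names and assertions here are illustrative assumptions, not the contents of `tests/test_actions_llm_utils.py`:

```python
# Illustrative tests only; names mirror the PR's description, not its code.
from langchain_openai import ChatOpenAI

from nemoguardrails.actions.llm.utils import get_llm_provider


def test_provider_inferred_from_module_path():
    llm = ChatOpenAI(model="gpt-4o-mini", api_key="dummy")
    assert get_llm_provider(llm) == "openai"


def test_wrapped_class_resolves_via_mro():
    # Subclass defined outside any langchain_* package; MRO traversal
    # should still locate langchain_openai and return "openai".
    class WrappedChat(ChatOpenAI):
        pass

    llm = WrappedChat(model="gpt-4o-mini", api_key="dummy")
    assert get_llm_provider(llm) == "openai"
```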