
Conversation

@Pouyanpi
Collaborator

Summary

Add automatic provider name detection from LLM module paths to eliminate manual provider specification. The implementation extracts provider names from LangChain package naming conventions (e.g., langchain_openai → openai) and handles edge cases (community packages, wrapped classes, multiple inheritance) via MRO traversal; a sketch of the inference logic follows the change list below.

  • Add _infer_provider_from_module() to extract provider names from LangChain package conventions
  • Add public get_llm_provider() API for retrieving provider names
  • Update _setup_llm_call_info() to automatically infer provider when not explicitly provided
  • Add comprehensive test suite covering standard packages, community packages, wrapped classes, and inheritance patterns
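
The PR description names `_infer_provider_from_module()` but the conversation doesn't show its body; the following is a minimal sketch of how such inference might work, assuming the `langchain_<provider>` partner-package convention and a `langchain_community.llms.<provider>` module layout for community packages (the actual implementation lives in nemoguardrails/actions/llm/utils.py):

```python
from typing import Optional


def _infer_provider_from_module(llm: object) -> Optional[str]:
    """Infer a provider name from the module path of the LLM's class.

    Walks the class MRO so that wrapped or subclassed LLMs still
    resolve to the underlying LangChain package.
    """
    for cls in type(llm).__mro__:
        module = getattr(cls, "__module__", "") or ""
        top_level = module.split(".")[0]
        if top_level == "langchain_community":
            # Community packages: langchain_community.llms.ollama -> ollama
            parts = module.split(".")
            if len(parts) >= 3:
                return parts[-1]
        elif top_level.startswith("langchain_"):
            # Partner packages: langchain_openai -> openai
            return top_level[len("langchain_") :]
    return None
```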

Test Plan

  • All 9 new tests pass, covering the following scenarios (a sketch of one such test appears after this list):
    • Standard providers (OpenAI, Anthropic, NVIDIA AI Endpoints)
    • Community packages (Ollama)
    • Unknown/custom packages
    • Patched/wrapped classes
    • Multiple inheritance patterns
    • Deep inheritance hierarchies
  • Existing tests continue to pass
  • Provider name is correctly inferred when model_provider parameter is None
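
For illustration, here is a sketch of what the wrapped-class test might look like; the real suite is in tests/test_actions_llm_utils.py, the class names below are made up, and the helper is the hypothetical sketch above:

```python
def test_provider_inferred_for_wrapped_class():
    """A subclass defined outside any langchain_* package should still
    resolve to the provider of its base class via MRO traversal."""

    class FakeChatOpenAI:
        pass

    # Pretend this class was defined in the langchain_openai package.
    FakeChatOpenAI.__module__ = "langchain_openai.chat_models"

    class PatchedChat(FakeChatOpenAI):
        # Defined in the test module, so its own __module__ won't match.
        pass

    assert _infer_provider_from_module(PatchedChat()) == "openai"
```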

@Pouyanpi Pouyanpi changed the title from "feat(llm): Add automatic provider inference for LangChain LLMs" to "feat(llm): add automatic provider inference for LangChain LLMs" on Oct 17, 2025
@Pouyanpi Pouyanpi requested a review from tgasser-nv October 17, 2025 12:06
@codecov-commenter

Codecov Report

❌ Patch coverage is 80.76923% with 5 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| nemoguardrails/actions/llm/utils.py | 80.76% | 5 Missing ⚠️ |


@Pouyanpi
Collaborator Author

@greptileai

Contributor

@greptile-apps greptile-apps bot left a comment


2 files reviewed, no comments


@Pouyanpi Pouyanpi requested a review from trebedea October 21, 2025 07:46
@Pouyanpi Pouyanpi force-pushed the feat/llm-provider-inference branch from c77ccf5 to 9ae8a0f on October 21, 2025 07:55
@Pouyanpi Pouyanpi added this to the v0.18.0 milestone Oct 21, 2025
@Pouyanpi Pouyanpi added the enhancement New feature or request label Oct 21, 2025
@Pouyanpi Pouyanpi self-assigned this Oct 21, 2025
@Pouyanpi Pouyanpi requested a review from Copilot October 21, 2025 15:37

Copilot AI left a comment


Pull Request Overview

This PR adds automatic provider inference for LangChain LLMs by extracting provider names from module paths, eliminating the need for manual provider specification. The implementation follows LangChain's package naming conventions (e.g., langchain_openai → openai) and handles edge cases through MRO traversal.

Key Changes:

  • Added _infer_provider_from_module() to extract provider names from LangChain package naming patterns
  • Added public get_llm_provider() API for external provider name retrieval (see the usage sketch after this list)
  • Updated _setup_llm_call_info() to automatically infer provider when model_provider is None
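
As a usage sketch (assuming get_llm_provider() takes the LLM instance, which the conversation doesn't spell out):

```python
from langchain_openai import ChatOpenAI

from nemoguardrails.actions.llm.utils import get_llm_provider

# Constructing the client makes no API call; the key is a placeholder.
llm = ChatOpenAI(model="gpt-4o", api_key="sk-placeholder")
print(get_llm_provider(llm))  # -> "openai"
```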

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

| File | Description |
|---|---|
| nemoguardrails/actions/llm/utils.py | Implements provider inference logic and integrates it into the LLM call setup flow |
| tests/test_actions_llm_utils.py | Comprehensive test suite covering standard providers, community packages, wrapped classes, and inheritance patterns |


Member

@trebedea trebedea left a comment


Looks good!

@Pouyanpi Pouyanpi merged commit 39d74ad into develop Oct 22, 2025
16 checks passed
@Pouyanpi Pouyanpi deleted the feat/llm-provider-inference branch October 22, 2025 12:03

Labels

enhancement New feature or request


4 participants