RFC: automatically use litellm if possible #534
Conversation
I personally really like this idea!
from ...models.interface import Model, ModelProvider
from .litellm_model import LitellmModel

DEFAULT_MODEL: str = "gpt-4.1"
Would importing https://github.com/openai/openai-agents-python/blob/v0.0.11/src/agents/models/openai_provider.py#L11 instead be better? Also, huge 👍 to switching from gpt-4o to gpt-4.1
Feels like a minor breaking change. Though probably fine! Good call.
def _create_fallback_provider(self, prefix: str) -> ModelProvider:
    if prefix == "litellm":
        from ..extensions.models.litellm_provider import LitellmProvider
I like this dynamic import here!
nice-to-have: on the LitellmProvider side, wrapping the litellm import in a try/except and raising a more user-friendly error message than the raw missing-module error could make the dev experience better
@seratch - LitellmProvider will try to import LitellmModel, which will display the nice error message: https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/models/litellm_model.py#L16-L19
Nice, you’re already ahead!
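For readers following along: the guard referenced above is the standard optional-dependency import pattern. A minimal sketch (the exact wording in the repo may differ):

```python
# Sketch of an optional-dependency import guard; the real message in
# litellm_model.py may be worded differently.
try:
    import litellm  # noqa: F401
except ImportError as e:
    raise ImportError(
        "litellm is required to use LitellmModel. "
        "Install it via the optional dependency group, e.g. "
        "pip install 'openai-agents[litellm]'."
    ) from e
```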
from .openai_provider import OpenAIProvider


class MultiProviderMap:
do we need this class to encapsulate these simple operations on a plain dict?
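To make the trade-off in this question concrete, a thin wrapper over a plain dict might look roughly like this (class and method names here are illustrative, not necessarily the PR's actual `MultiProviderMap` API):

```python
# Hypothetical thin wrapper over a plain dict; a bare dict[str, ModelProvider]
# would support the same operations. The class mainly adds a named public
# surface and room for validation later.
class PrefixProviderMap:  # illustrative name, not the class in this PR
    def __init__(self) -> None:
        self._providers: dict[str, object] = {}

    def get_provider(self, prefix: str) -> object | None:
        return self._providers.get(prefix)

    def add_provider(self, prefix: str, provider: object) -> None:
        self._providers[prefix] = provider

    def remove_provider(self, prefix: str) -> None:
        self._providers.pop(prefix, None)
```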
Co-authored-by: Steven Heidel <steven@heidel.ca>
Summary
This replaces the default model provider with a `MultiProvider`, which has the logic:
- If the model name starts with `openai/` or doesn't contain "/", use OpenAI.
- If the model name starts with `litellm/`, use LiteLLM to select the appropriate model provider.

It's also extensible, so users can create their own mappings. I also imagine that if we natively supported Anthropic/Gemini etc., we could add them to MultiProvider to make it work.

The goal is that it should be really easy to use any model provider. Today if you pass `model="gpt-4.1"`, it works great. But `model="claude-sonnet-3.7"` doesn't. If we can make it that easy, it's a win for devx.

I'm not entirely sure if this is a good idea - is it too magical? Is the API too reliant on litellm? Comments welcome.
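A rough sketch of the prefix-routing rule described above (simplified and self-contained; the function names are illustrative, not the PR's actual API):

```python
# Illustrative prefix-based routing: no prefix or "openai/" goes to the OpenAI
# provider, "litellm/" goes to the LiteLLM-backed provider.
def split_prefix(model_name: str) -> tuple[str | None, str]:
    """Split "litellm/claude-sonnet-3.7" into ("litellm", "claude-sonnet-3.7")."""
    if "/" in model_name:
        prefix, rest = model_name.split("/", 1)
        return prefix, rest
    return None, model_name


def resolve_provider_name(model_name: str) -> str:
    prefix, _ = split_prefix(model_name)
    if prefix is None or prefix == "openai":
        return "openai"
    if prefix == "litellm":
        return "litellm"
    raise ValueError(f"Unknown model prefix: {prefix!r}")


# Examples matching the behavior described in the summary:
assert resolve_provider_name("gpt-4.1") == "openai"
assert resolve_provider_name("openai/gpt-4.1") == "openai"
assert resolve_provider_name("litellm/claude-sonnet-3.7") == "litellm"
```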
Test plan
For now, the example. Will add unit tests if we agree it's worth merging.