feat: add LiteLLM support #549
Conversation
Each model provided by LiteLLM has its own environment variable for the API key. It's recommended to create a `.env` file and place your API key there, which LiteLLM will automatically load. Environment variables for popular LLMs:

- ANTHROPIC_API_KEY
- GEMINI_API_KEY
- COHERE_API_KEY
- MISTRAL_API_KEY
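For example, a minimal `.env` file for Claude 3 could contain just the matching key (the values below are placeholders, not real keys):

```
# .env — use whichever key matches your chosen model (placeholder values)
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
```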
When the `model` is defined, but the `--openai-api-key` or `--openai-api-base` values are undefined, assume the user wants to use `--litellm` (but only if the litellm package can be found).
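A rough sketch of that fallback rule (illustrative only, not the actual aider code; the function and argument names are made up):

```python
import importlib.util

def should_default_to_litellm(model, openai_api_key, openai_api_base):
    """Fall back to LiteLLM only when a model is named, no OpenAI key/base
    was given, and the optional litellm package is importable."""
    if not model or openai_api_key or openai_api_base:
        return False
    return importlib.util.find_spec("litellm") is not None
```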
This is looking really great, thanks for putting in all the work! My main question is why not just use litellm, and ditch the openai client? Direct calls to openai are handled by litellm, as are OpenRouter and Azure. I had been planning on going down that path with a litellm integration.
aider/models/model.py
`if client and not hasattr(client, "base_url"):`
Not sure about this... I set up LiteLLM in Docker on a VM for my team to use, the idea being we can set up the models in one place and everyone has access (and all our projects), but this would require setting a URL or I guess a hacky port forward. It would be nice to be able to set a URL for LiteLLM in case it's not running locally.
This condition doesn't preclude setting a base_url for LiteLLM, but I understand why you'd think that. It's just duck typing to detect the LiteLLM client, which doesn't have a `base_url` property. There's probably a better way (with more clarity) to detect that, so I'll see what I can do.
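To illustrate the duck typing described here (a sketch, assuming that, as stated above, the `litellm.LiteLLM` client has no `base_url` attribute while the openai client does):

```python
def is_litellm_client(client):
    # The openai client exposes a base_url attribute; the LiteLLM client does not,
    # so the absence of that attribute is used as the discriminator.
    return client is not None and not hasattr(client, "base_url")
```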
That could make sense in the future. For now, it's better to make the transition smooth for everyone by still allowing the previous usage, just in case there's a subtle difference between the two approaches? Also, I figured it was easier to get a less drastic change merged sooner.
This prevents any confusion about the check's purpose.
aider/main.py
```python
import os

# Support LITELLM_API_KEY, LITELLM_BASE_URL, etc.: every LITELLM_-prefixed
# environment variable becomes a keyword argument for the client, e.g.
# LITELLM_BASE_URL=http://localhost:4000 -> base_url="http://localhost:4000".
litellm_kwargs = {
    key.replace("LITELLM_", "").lower(): value
    for key, value in os.environ.items()
    if key.startswith("LITELLM_")
}

# Imported here so litellm remains an optional dependency.
from litellm import LiteLLM

client = LiteLLM(**litellm_kwargs)
```
@nkeilar please try setting the `LITELLM_BASE_URL` environment variable and let me know if that works for you:

`LITELLM_BASE_URL=http://localhost:4000 aider --model gemini`
I really appreciate the work you put in here. Just wanted to give you a heads up that I've also got a litellm branch going. I am leaning towards a wholesale replacement of the openai client with litellm. I'm far enough into my branch now that it looks feasible. You can see my work in progress in #564.
🚨 Feedback wanted!
When all of `--openai-api-key`, `--openai-api-base`, and `--openrouter` are not passed and the user has installed LiteLLM with `pip install litellm`, Aider will default to using the LiteLLM client, which has support for many models, including Claude 3, Gemini 1.5 Pro, and Command R+.

You also need to install model-specific clients, which LiteLLM will use under the hood. For example, you'll need to `pip install anthropic` to use Claude 3. To use Gemini 1.5 Pro, you'll need to `pip install google-generativeai` first.

Since LiteLLM loads the nearest `.env` file, it's recommended to place your model-specific API key in there. For example, LiteLLM expects an `ANTHROPIC_API_KEY` environment variable to exist when using Claude 3. Of course, you can define the environment variable however you please, so a `.env` file is optional. Find your preferred model on the LiteLLM supported models page to determine which environment variable is expected by LiteLLM based on which model you're using.

Does this add dependencies to Aider?
No, the user must run `pip install litellm` for Aider to support LiteLLM.

Can I use any model supported by LiteLLM?
Yes, the supported models are loaded directly from LiteLLM's GitHub repository. As long as you keep LiteLLM up to date, you can use any new model that LiteLLM supports.
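One quick way to see which models your installed LiteLLM knows about is its model metadata table (a sketch; it assumes the `litellm.model_cost` dictionary, which LiteLLM populates from that same repository):

```python
import litellm

# Listing the metadata table's keys gives a rough view of the model names
# you could pass to --model; filter to a couple of families as an example.
for name in sorted(litellm.model_cost):
    if "claude-3" in name or "gemini-1.5" in name:
        print(name)
```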
What model is used for the summarizer and repo-map?

If you're using Claude 3 Opus or GPT-4, Aider will use GPT-3.5 Turbo for the chat summarizer and repo-map features. Otherwise, the `--model` you defined is what's used.
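A hypothetical sketch of that rule (the function and model-name strings are illustrative, not the actual aider implementation):

```python
def weak_model_for(main_model: str) -> str:
    # Claude 3 Opus and GPT-4 hand summarization and repo-map work to GPT-3.5 Turbo;
    # any other model is also used for those background tasks.
    if main_model in ("claude-3-opus-20240229", "gpt-4"):
        return "gpt-3.5-turbo"
    return main_model
```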
Has this been tested extensively?

No, I'm looking for people to try it out and leave feedback on this pull request.
Model aliases
When defining the `--model` name, you can use these aliases:

The `gpt-3.5` and `gpt-4` aliases were already supported, but they're also supported when using the `--litellm` flag or when LiteLLM is used by default.

Additional improvements
I've added some environment variables to improve `.env` file support, so I can run aider with `dotenv run aider` to use them:

- `AIDER_MODEL` environment variable is now supported
- `AIDER_EDIT_FORMAT` environment variable is now supported
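For example, a `.env` file like the one below (the values are illustrative, not a recommendation) lets `dotenv run aider` pick everything up in one place:

```
# Illustrative .env — placeholder values
AIDER_MODEL=claude-3-opus-20240229
AIDER_EDIT_FORMAT=diff
ANTHROPIC_API_KEY=sk-ant-...
```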