
Add support for Palm, Claude-2, Cohere, Azure OpenAI Llama2, CodeLlama (100+LLMs) - using LiteLLM #834

Closed

Conversation

ishaan-jaff

@ishaan-jaff ishaan-jaff commented Sep 5, 2023

This PR adds support for the above-mentioned LLMs using LiteLLM: https://github.com/BerriAI/litellm/
Example

import os

from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

# anthropic call
response = completion(model="claude-instant-1", messages=messages)
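As a follow-up to the example above: LiteLLM normalizes responses to the OpenAI format, so (assuming that schema) the generated text can be read the same way regardless of which provider handled the call.

# responses follow the OpenAI schema for every provider
print(response["choices"][0]["message"]["content"])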

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@ishaan-jaff
Author

@haileyschoelkopf @lintangsutawika can I get a review on this PR?

Happy to add more docs/testing on this if the initial commit looks good to you!

@lintangsutawika
Contributor

This is an interesting PR, but I think it would be better if litellm were in its own Python class, like litellm.py, and have the user call it through --model litellm.

Wdyt?
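For illustration only, a minimal sketch of what such a standalone wrapper could look like. The class name LiteLLMModel, the generate method, and the constructor arguments are hypothetical and not part of the harness or this PR; it only shows how litellm.completion could sit behind a single class selected via --model litellm.

from litellm import completion

class LiteLLMModel:
    # Hypothetical standalone wrapper (litellm.py); not the harness's actual model API.
    def __init__(self, model="gpt-3.5-turbo", **completion_kwargs):
        self.model = model
        self.completion_kwargs = completion_kwargs

    def generate(self, prompt):
        # Route a single prompt through LiteLLM's provider-agnostic completion() call.
        messages = [{"role": "user", "content": prompt}]
        response = completion(model=self.model, messages=messages, **self.completion_kwargs)
        return response["choices"][0]["message"]["content"]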

@StellaAthena
Member

I do not think we are going to be replacing the official OpenAI API call with yours. It does look like you support some APIs we don't, such as Jurassic and Aleph Alpha, but this should really be implemented as a stand-alone module rather than as middleware between the Eval Harness and the OpenAI API.

@keyboardAnt

I do not think we are going to be replacing the official OpenAI API call with yours.

@StellaAthena, what are your thoughts on adding @ishaan-jaff's litellm alongside the official OpenAI API, rather than replacing it? (with a unique extra)

@StellaAthena
Member

StellaAthena commented Nov 28, 2023

I do not think we are going to be replacing the official OpenAI API call with yours.

@StellaAthena, what are your thoughts on adding @ishaan-jaff's litellm alongside the official OpenAI API, rather than replacing it? (with a unique extra)

That is something I am a lot more amenable to. It may also make more sense at a design level to port over the implementations we are missing individually instead of establishing this as official middleware.

@StellaAthena
Member

Closing as abandoned.
