
adding support for Anthropic, Cohere, Replicate, Azure #172

Merged

merged 5 commits into Codium-ai:main on Aug 6, 2023

Conversation

@krrishdholakia (Contributor)

Hi @mrT23 ,

Noticed you're calling just OpenAI. I'm working on litellm (a simple library to standardize LLM API calls - https://github.com/BerriAI/litellm) and was wondering if we could be helpful.

Added support for Anthropic, Llama2, Cohere by replacing the raw openai.ChatCompletion.acreate endpoint with acompletion from litellm.

Would love to know if this helps.
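
In practice, the swap looks roughly like this (a minimal sketch rather than the exact diff in this PR; the model name, prompts, and temperature are illustrative):

import asyncio
from litellm import acompletion

async def review(system_prompt: str, user_prompt: str) -> str:
    # Previously: response = await openai.ChatCompletion.acreate(model="gpt-4", ...)
    # litellm keeps the same OpenAI-style interface, but routes the call to
    # whichever provider serves the requested model.
    response = await acompletion(
        model="claude-2",  # or "gpt-4", "command-nightly", a Replicate model, ...
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.2,
    )
    return response["choices"][0]["message"]["content"]

print(asyncio.run(review("You are a PR reviewer.", "Review this diff: ...")))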

@mrT23 (Collaborator) commented Aug 3, 2023

Hi @krrishdholakia

This looks very interesting.

Some questions:

  1. Have you tested this? Were you able to generate a PR review with a non-OpenAI model? If not, how do you propose to test this (I don't have an Anthropic key)?
  2. Do all the models you mentioned ("Anthropic, Llama2, Cohere") support the 'system' and 'user' roles?

In addition, token counting is a bit tricky. We need to list the maximum number of tokens for each additional model (see pr_agent/algo/__init__.py).
We also need to somehow adjust the encoder we use for token counting (pr_agent/algo/token_handler.py, line 30). However, we might be able to simplify things, just assume their encoders are similar to gpt-3.5's, and use the standard tiktoken encoder.
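
A sketch of that simplification (assuming the per-model limits live in a dict like MAX_TOKENS in pr_agent/algo/__init__.py; the context sizes below are assumptions to verify against each provider's docs):

import tiktoken

# Per-model token limits, extended for the new providers
# (illustrative values, not authoritative).
MAX_TOKENS = {
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
    "claude-2": 100000,
    "claude-instant-1": 100000,
    "command-nightly": 4096,
}

# Approximate every model with gpt-3.5's tokenizer: exact for OpenAI chat
# models, a rough estimate for Anthropic/Cohere/Replicate models.
encoder = tiktoken.encoding_for_model("gpt-3.5-turbo")

def count_tokens(text: str) -> int:
    return len(encoder.encode(text))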

@mrT23 mrT23 self-assigned this Aug 3, 2023
@mrT23 mrT23 added the enhancement label Aug 3, 2023
@krrishdholakia (Contributor Author)

Hey @mrT23

Thanks for the feedback

  1. I did some brief testing, but I'll do something more intensive to make sure it works end-to-end - I do have the relevant model keys.
  2. LiteLLM handles that for us - it translates the OpenAI-style input into the required input format for each specific provider. Here's how: https://github.com/BerriAI/litellm/blob/eb475ad0be6d36f3a7dcea76f22c94177f097dc3/litellm/main.py#L69

Re: tokens, I'll make the required changes and update the PR - let's see if that works better.

@krrishdholakia (Contributor Author)

Preparing review...

10 similar comments

@krrishdholakia (Contributor Author)

PR Analysis

  • 🎯 Main theme: Adding support for additional language models and APIs to standardize generated responses
  • 📌 Type of PR: Enhancement
  • 🧪 Relevant tests added: No
  • Focused PR: Yes, the title, description, and changes are focused on adding support for multiple language models.
  • 🔒 Security concerns: No, adding support for additional models does not introduce new security issues.

PR Feedback

  • 💡 General PR suggestions: The usage of a library to standardize calls to multiple language models is a good idea and makes the code more reusable. Consider adding a comment explaining why you are using the litellm library.

@krrishdholakia (Contributor Author)

Preparing review...

5 similar comments

@krrishdholakia (Contributor Author)

{
  "PR Analysis": {
    "Main theme": "Extending the codebase with additional AI models",
    "Type of PR": "Enhancement",
    "Relevant tests added": "No",
    "Focused PR": "yes, the title, description and code changes all relate to the addition of support for Anthropic, Cohere, and other models."
  },
  "PR Feedback": {
    "General PR suggestions": "The change looks good and helps make the code more modular. Adding a test case that covers the new models would further improve this PR.",
    "Security concerns": "No, the code changes only relate to integrating additional AI models."
  }
}

@krrishdholakia (Contributor Author)

@mrT23 I tested the new version e2e and it works - please review and let me know if there's anything else I can do to help here.

You can also test it with llama2 https://replicate.com/replicate/llama-2-70b-chat/api

@mrT23 (Collaborator) commented Aug 4, 2023

@krrishdholakia
Note that to debug locally, with local printing and without publishing, just set the following in pr_agent/settings/configuration.toml:

publish_output=false
verbosity_level=2 # 0,1,2

So you can remove the prints.


Other than that, the PR looks very good. 👍
I will do some internal testing myself on Sunday, and then merge.

@okotek (Contributor) commented Aug 5, 2023

That's really cool! We shouldn't lose the ability to specify an OpenAI org and use Azure OpenAI though - it's being actively used.

@mrT23 (Collaborator) commented Aug 5, 2023

PR Analysis

  • 🎯 Main theme: Adding support for Anthropic, Cohere, Replicate, Azure
  • 📌 Type of PR: Enhancement
  • 🧪 Relevant tests added: No
  • Focused PR: Yes, the PR is focused as it aims to add support for new AI models and all changes are relevant to this goal.
  • 🔒 Security concerns: No, the PR does not introduce any obvious security issues. However, it's important to ensure that the new AI models and the 'litellm' library do not have any inherent security issues.

PR Feedback

  • 💡 General PR suggestions: The PR is generally well-structured and the changes are clear. However, it would be beneficial to add some tests to ensure the new functionality works as expected. Additionally, there are several print statements that seem to be used for debugging purposes; it would be better to remove these or replace them with proper logging. Lastly, the PR could benefit from more detailed comments explaining the changes, especially in the 'ai_handler.py' file where the core changes are made.

@krrishdholakia (Contributor Author)

Preparing review...

5 similar comments

@krrishdholakia (Contributor Author)

PR Analysis

  • 🎯 Main theme: Adding support for Anthropic, Cohere, Replicate, Azure
  • 📌 Type of PR: Enhancement
  • 🧪 Relevant tests added: No
  • Focused PR: Yes, the title and description are clear and the changes are properly derived from them.
  • 🔒 Security concerns: No, the changes do not introduce any security concerns.

PR Feedback

  • 💡 General PR suggestions: The changes look good and the code seems to be well-structured. However, it would be good to add relevant tests to ensure that the new functionality works as expected. Also, consider adding more detailed descriptions to the commit messages.

@krrishdholakia (Contributor Author)

PR Analysis

  • 🎯 Main theme: Adding support for Anthropic, Cohere, Replicate, Azure
  • 📌 Type of PR: Enhancement
  • 🧪 Relevant tests added: No
  • Focused PR: Yes, the PR has a clear and coherent title and description, and all PR code diff changes are properly derived from the title and description.
  • 🔒 Security concerns: No, the PR code does not introduce any possible security concerns or issues.

PR Feedback

  • 💡 General PR suggestions: The PR looks good overall. However, it would be helpful to add relevant tests to ensure the functionality of the added models. Also, it would be better to remove the print statements used for debugging before merging the PR.

@krrishdholakia (Contributor Author)

PR Analysis

  • 🎯 Main theme: Adding support for Anthropic, Cohere, Replicate, Azure
  • 📌 Type of PR: Enhancement
  • 🧪 Relevant tests added: No
  • Focused PR: Yes, the PR has a clear and coherent title and description, and all PR code diff changes are properly derived from the title and description.
  • 🔒 Security concerns: No, the PR code does not introduce any possible security concerns or issues.

PR Feedback

  • 💡 General PR suggestions: The PR looks good overall. However, it would be helpful to add relevant tests to ensure that the added support for Anthropic, Cohere, Replicate, and Azure works as expected. Also, it would be better to avoid printing debug statements like print(resp, finish_reason) in the code; use proper logging statements instead.

@krrishdholakia (Contributor Author)

PR Analysis

  • 🎯 Main theme: Integration of litellm library to support multiple language models
  • 📌 Type of PR: Enhancement
  • 🧪 Relevant tests added: No
  • Focused PR: Yes, the PR is focused as it aims to integrate the litellm library to support multiple language models like Anthropic, Llama2, Cohere, etc. All the changes in the PR are aligned with this objective.
  • 🔒 Security concerns: No, the PR does not introduce any apparent security concerns. However, it's important to ensure that all the keys are stored securely and not exposed in any logs or error messages.

PR Feedback

  • 💡 General PR suggestions: The PR is well-structured and the changes are in line with the stated objective. However, it would be beneficial to add some tests to verify the new functionality. Also, it would be helpful to handle the case where the litellm library is not installed or the keys for the new models are not provided. Lastly, ensure that the error messages are descriptive enough to help in debugging in case of any issues.

@krrishdholakia (Contributor Author)

@okotek @mrT23 Tested using OpenAI, Azure, and Anthropic.

To use a non-OpenAI/Azure provider:

  1. in .secrets.toml, set the key for the relevant provider
  2. set the model in configuration.toml (see the sketch below)
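
For example, for Anthropic (a hedged sketch - check .secrets_template.toml and configuration.toml in the repo for the exact section and key names):

# .secrets.toml
[anthropic]
key = "<your-anthropic-api-key>"

# configuration.toml
[config]
model = "claude-2"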

A list of available models is available via code:

import litellm

print(litellm.model_list)

and also here:

open_ai_chat_completion_models = [
    "gpt-4",
    "gpt-4-0613",
    "gpt-4-32k",
    "gpt-4-32k-0613",
    #################
    "gpt-3.5-turbo",
    "gpt-3.5-turbo-16k",
    "gpt-3.5-turbo-0613",
    "gpt-3.5-turbo-16k-0613",
]

open_ai_text_completion_models = [
    "text-davinci-003",
]

cohere_models = [
    "command-nightly",
    "command",
    "command-light",
    "command-medium-beta",
    "command-xlarge-beta",
]

anthropic_models = [
    "claude-2",
    "claude-instant-1",
]

replicate_models = [
    "replicate/",
]  # placeholder, to make sure we accept any replicate model in our model_list

We're also expecting to add support for PaLM models soon.

@krrishdholakia (Contributor Author)

@zmeir let me know if you face any issues

@okotek / @mrT23 let me know if there are any other blockers

@zmeir (Contributor) commented Aug 6, 2023

@krrishdholakia @okotek I tested it with my deployment (Azure OpenAI + GitHub) and almost everything seems to be working as expected.
I did encounter this error while calling the /improve command, but I think it's not related to this branch specifically, as I've seen it before when testing other branches:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
File "/home/vcap/app/pr_agent/algo/git_patch_processing.py", line 206, in convert_to_hunks_with_lines_numbers
start1, size1, start2, size2 = map(int, match.groups()[:4])
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
result = await app(  # type: ignore[func-returns-value]
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/applications.py", line 290, in __call__
await super().__call__(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette_context/middleware/raw_middleware.py", line 92, in __call__
await self.app(scope, receive, send_wrapper)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 241, in app
raw_response = await run_endpoint_function(
File "/home/vcap/deps/0/python/lib/python3.10/site-packages/fastapi/routing.py", line 167, in run_endpoint_function
return await dependant.call(**values)
File "/home/vcap/app/pr_agent/servers/github_app.py", line 37, in handle_github_webhooks
response = await handle_request(body, event=request.headers.get("X-GitHub-Event", None))
File "/home/vcap/app/pr_agent/servers/github_app.py", line 110, in handle_request
await agent.handle_request(api_url, comment_body)
File "/home/vcap/app/pr_agent/agent/pr_agent.py", line 72, in handle_request
await command2class[action](pr_url, args=args).run()
File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 50, in run
await retry_with_fallback_models(self._prepare_prediction)
File "/home/vcap/app/pr_agent/algo/pr_processing.py", line 222, in retry_with_fallback_models
return await f(model)
File "/home/vcap/app/pr_agent/tools/pr_code_suggestions.py", line 62, in _prepare_prediction
self.patches_diff = get_pr_diff(self.git_provider,
File "/home/vcap/app/pr_agent/algo/pr_processing.py", line 58, in get_pr_diff
patches_extended, total_tokens = pr_generate_extended_diff(pr_languages, token_handler,
File "/home/vcap/app/pr_agent/algo/pr_processing.py", line 115, in pr_generate_extended_diff
full_extended_patch = convert_to_hunks_with_lines_numbers(extended_patch, file)
File "/home/vcap/app/pr_agent/algo/git_patch_processing.py", line 208, in convert_to_hunks_with_lines_numbers
start1, size1, size2 = map(int, match.groups()[:3])
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'
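
For context, this TypeError typically means a hunk header omitted a size: unified-diff headers such as @@ -5 +5 @@ drop the size when it equals 1, so one of the regex groups is None. A defensive parse along these lines (an illustrative sketch, not necessarily the actual fix that landed) avoids the crash:

import re

HUNK_HEADER = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@")

def parse_hunk_header(line: str):
    match = HUNK_HEADER.match(line)
    start1, size1, start2, size2 = match.groups()
    # Sizes are omitted for single-line hunks ("@@ -5 +5 @@"), so default
    # them to 1 instead of passing None to int().
    return int(start1), int(size1 or 1), int(start2), int(size2 or 1)

print(parse_hunk_header("@@ -5 +5,3 @@ def foo():"))  # -> (5, 1, 5, 3)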

@okotek (Contributor) commented Aug 6, 2023

You're right, it's not related, we'll fix this.

Edit: fixed

@okotek okotek merged commit 00e1925 into Codium-ai:main Aug 6, 2023
@okotek (Contributor) commented Aug 6, 2023

@krrishdholakia thanks for your contributions - it works amazingly well

@mrT23 (Collaborator) commented Aug 6, 2023

@krrishdholakia Thanks a lot, litellm is really a great tool

@mrT23 mrT23 mentioned this pull request Aug 7, 2023
@krrishdholakia (Contributor Author) commented Aug 7, 2023

Thank you for the merge!! @mrT23 @okotek please let me know if there's any additional feedback

@okotek (Contributor) commented Aug 7, 2023

We've got extra eyes here... I'll need to fix that ASAP.

@dzlab commented Aug 11, 2023

@krrishdholakia does this support Hugging Face? I see litellm supports HF, but looking at the changes made in AiHandler, I don't see you reading HF_TOKEN.

@krrishdholakia (Contributor Author)

Hey @dzlab, we do support Hugging Face - I can make a PR to enable support for that in Codium. Any specific model you're trying to use?

P.S.: we add new models + deployment platforms every day - announcements are on Discord; feel free to join and see if any other providers would make sense to add.

cc: @okotek / @mrT23 open to thoughts on how we could make it easier to add additional integrations

@dzlab commented Aug 11, 2023

HF has tons of models - you probably don't want to explicitly add specific ones, no?

In the AiHandler, if it could do something like litellm.init(config), that would be great - where config is a dictionary of those API keys (e.g. {'HF_KEY': 'xyz'}). Then we move the if-then logic from AiHandler into this new API in litellm.

Also, in settings/configuration.toml we could have a new section where the user sets the keys needed for the LLM they want to use, for instance:

[litellm]
# put here whatever keys are accepted by litellm
HF_KEY=

If that sounds good, I can work on both PRs: one for litellm to add the config API, then one for pr-agent afterwards.
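
A sketch of the shape being proposed (hypothetical - litellm.init does not exist at the time of writing, and the key names here are examples):

import litellm

# Hypothetical API: hand litellm every provider key in one place, instead
# of branching on each provider inside pr-agent's AiHandler.
litellm.init({
    "HF_KEY": "hf_...",
    "ANTHROPIC_API_KEY": "sk-ant-...",
})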

@krrishdholakia (Contributor Author)

Agreed - I guess I was asking since different models have different output styles; see BerriAI/litellm#87.

We're slowly adding coverage for consistent output formats across models (it doesn't block usage) - so if you have a specific one in mind, we can prioritize that.

yochail pushed a commit to yochail/pr-agent that referenced this pull request Feb 11, 2024
adding support for Anthropic, Cohere, Replicate, Azure