Updated the documentation and code on how to use LiteLLM to create a unified interface for both ChatGPT and Ollama #10
Conversation
…ied interface between ollama and chatgpt
please keep the
Thanks for reminding me, I have reverted to the previous commit.
One final nitpick from me and it's good to go. @nqngo wanna give a gloss over?
@phattantran1997 Please have a look at [llm_assistant/ollama/README.md] for a better understanding of how the LiteLLM proxy server works.
Initially, I considered using LiteLLM to replace the OpenAI SDK for interacting with both ChatGPT and Ollama. However, as suggested by @phattantran1997, we can improve upon this approach.
Let's create a proxy server that wraps around both ChatGPT and Ollama. By doing this, we can reuse the OpenAI SDK to talk to this proxy server instead of the base OpenAI server. This approach is better because it lets us integrate our Ollama models with existing OpenAI applications without requiring any code changes.
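To illustrate the idea: because the proxy exposes an OpenAI-compatible `/v1/chat/completions` endpoint, the same client code can target either backend just by changing the model name. In practice you would point the OpenAI SDK's `base_url` at the proxy; the dependency-free sketch below uses stdlib HTTP to show the request shape. The proxy address (`localhost:4000`, LiteLLM's default port) and the model names `gpt-4o` and `ollama/llama3` are assumptions for illustration, not values from this PR.

```python
import json
import urllib.request

# Hypothetical proxy address; adjust to wherever the LiteLLM proxy runs.
PROXY_URL = "http://localhost:4000/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload.

    The same payload works whether `model` routes to ChatGPT
    (e.g. "gpt-4o") or to a local Ollama model (e.g. "ollama/llama3"),
    because the proxy presents a single OpenAI-compatible interface.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str, api_key: str = "sk-anything") -> str:
    """POST a chat request to the proxy and return the reply text."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a running proxy); swapping backends is only a
# model-name change, with no other code changes:
#   print(chat("gpt-4o", "Hello"))
#   print(chat("ollama/llama3", "Hello"))
```

Equivalently, with the OpenAI SDK you would construct the client as `OpenAI(base_url="http://localhost:4000", api_key=...)` and leave the rest of the application untouched.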
As for Docker, we need to set up the proxy server inside the Ollama container and interact with that proxy server instead. @samhwang @nqngo
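For the container setup, a LiteLLM proxy is driven by a `config.yaml` whose `model_list` maps public model names to backends. The sketch below is an assumed configuration, not the one from this PR: the Ollama `api_base` uses Ollama's default port 11434, and the model names are placeholders.

```yaml
# config.yaml -- hypothetical LiteLLM proxy config routing to both backends
model_list:
  - model_name: gpt-4o            # served via the OpenAI API
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: llama3            # served by Ollama inside the container
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
```

The proxy would then be started inside the container with something like `litellm --config config.yaml --port 4000`, and applications point their OpenAI SDK `base_url` at that port.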