Remote Ollama Server #592
Comments
Check out Ollama's docs for setting up a server.
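(For context, what those docs boil down to is making the Ollama server listen on your network interface instead of only on localhost. Roughly, assuming a default install and the default port:

# make the Ollama server reachable from other machines on the LAN
OLLAMA_HOST=0.0.0.0:11434 ollama serve

On a systemd-managed install the same OLLAMA_HOST variable goes into the service's environment instead.)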
I cannot find the solution in the link you mentioned.
Follow the instructions listed here to install the LiteLLM piece for SGPT: https://github.com/TheR1D/shell_gpt/wiki/Ollama. When editing your ~/.config/shell_gpt/.sgptrc config, update the following settings as below; just make sure you replace my URL with yours.
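(The settings block itself didn't survive the formatting here; going by the wiki, the relevant entries look roughly like this, with the IP and port standing in for your own server's address:

API_BASE_URL=http://192.168.1.50:11434
USE_LITELLM=true
)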
Thanks @spennell. I found I had just forgotten to add the "ollama/" prefix in the DEFAULT_MODEL value. Now everything works well. So add one more line to @spennell's answer:
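(The extra line is the DEFAULT_MODEL entry with the ollama/ prefix; the model name below is just an example, use whichever model your server has pulled:

DEFAULT_MODEL=ollama/llama3
)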
Adding the above to the guide would be really useful. Also, litellm can now provide a bearer token for the ollama_chat provider (they still need to add it to the ollama one), so it would be nice to allow passing an api key as well. I got it to work here by just removing the .pop():

diff --git a/sgpt/handlers/handler.py b/sgpt/handlers/handler.py
index a17d802..c630213 100644
--- a/sgpt/handlers/handler.py
+++ b/sgpt/handlers/handler.py
@@ -23,7 +23,6 @@ if use_litellm:
     completion = litellm.completion
     litellm.suppress_debug_info = True
-    additional_kwargs.pop("api_key")
 else:
     from openai import OpenAI
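(With the .pop() removed, the OPENAI_API_KEY value from .sgptrc is passed through to litellm as api_key, so a config along these lines should work for a token-protected remote server; the host, model name, and token below are placeholders, and ollama_chat is the provider litellm accepts the token for:

DEFAULT_MODEL=ollama_chat/llama3
API_BASE_URL=https://ollama.example.com
OPENAI_API_KEY=<your bearer token>
USE_LITELLM=true
)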
So, has anyone got shell_gpt to work with a remote Ollama server?
I have an Ollama server on a remote machine in my local network and am trying to use shell_gpt on my local PC.
But sgpt is still trying to connect to localhost... How can I connect sgpt to a specific server?