
Remote Ollama Server #592

Open
revoun opened this issue Jul 10, 2024 · 5 comments

Comments

@revoun

revoun commented Jul 10, 2024

So, has anyone gotten shell_gpt to work with a remote Ollama server?
I have an Ollama server on a remote machine in my local network and am trying to use shell_gpt on my local PC,
but sgpt still tries to connect to localhost. How can I point sgpt at a specific server?

@will-wright-eng
Contributor

check out ollama's docs for setting up a server
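
For reference, Ollama binds to loopback by default, so a remote client cannot reach it until the server is told to listen on the network via the `OLLAMA_HOST` environment variable (documented Ollama behavior; the systemd path below assumes a standard Linux install):

```shell
# On the server machine: make Ollama listen on all interfaces instead of
# only 127.0.0.1, so other hosts on the LAN can reach port 11434.
#
# For a systemd install, add a drop-in via `systemctl edit ollama.service`:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
# then `systemctl restart ollama`.
#
# When launching manually instead:
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
```

Note that this only fixes the server side; sgpt on the client still needs to be pointed at the server's address, as described below.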

@8ar10der

Hi @will-wright-eng

I can't find the solution in the link you mentioned.
I have the same scenario: I deployed an Ollama instance in my homelab (e.g. http://192.168.1.2:11434). How can I make sgpt use that address?

@spennell

spennell commented Aug 2, 2024

Follow the instructions listed here to install the LiteLLM piece for sgpt: https://github.com/TheR1D/shell_gpt/wiki/Ollama

Then edit your ~/.config/shell_gpt/.sgptrc config and update the following settings as below, making sure you replace my URL with yours.

OPENAI_USE_FUNCTIONS=false
API_BASE_URL=http://ai:11434
USE_LITELLM=true
OPENAI_API_KEY=testkey

@8ar10der

8ar10der commented Aug 6, 2024

Thanks @spennell. I found I had just forgotten to add the "ollama/" prefix to the DEFAULT_MODEL value. Now everything works. So add one more line to @spennell's answer:

DEFAULT_MODEL=ollama/deepseek-coder-v2:latest # change this to the model you want
OPENAI_USE_FUNCTIONS=false
API_BASE_URL=http://ai:11434
USE_LITELLM=true
OPENAI_API_KEY=testkey
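
The combined settings above can be applied in one step; a minimal sketch, assuming the default config path and the example host/model from this thread (substitute your own):

```shell
# Append the remote-Ollama settings from this thread to the sgpt config.
# http://ai:11434 and the model name are placeholders - use your own values.
CONFIG="$HOME/.config/shell_gpt/.sgptrc"
mkdir -p "$(dirname "$CONFIG")"
cat >> "$CONFIG" <<'EOF'
DEFAULT_MODEL=ollama/deepseek-coder-v2:latest
OPENAI_USE_FUNCTIONS=false
API_BASE_URL=http://ai:11434
USE_LITELLM=true
OPENAI_API_KEY=testkey
EOF
```

Appending (`>>`) keeps any other settings already in the file; if a key appears twice, remove the older line.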

@kov

kov commented Oct 20, 2024

Adding the above to the guide would be really useful. Also, LiteLLM can now send a bearer token for the ollama_chat provider (it still needs to be added to the ollama one), so it would be nice to allow passing an API key as well. I got it working here just by removing the .pop():

diff --git a/sgpt/handlers/handler.py b/sgpt/handlers/handler.py
index a17d802..c630213 100644
--- a/sgpt/handlers/handler.py
+++ b/sgpt/handlers/handler.py
@@ -23,7 +23,6 @@ if use_litellm:
 
     completion = litellm.completion
     litellm.suppress_debug_info = True
-    additional_kwargs.pop("api_key")
 else:
     from openai import OpenAI
 
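
To illustrate what that one-line removal changes: a sketch of the keyword arguments sgpt forwards to litellm.completion (the dict name mirrors sgpt/handlers/handler.py; the values here are illustrative, taken from the config in this thread):

```python
# Illustrative sketch of the kwargs sgpt builds from its config and passes
# to litellm.completion(**additional_kwargs). Values are placeholders.
additional_kwargs = {
    "api_base": "http://ai:11434",  # API_BASE_URL from .sgptrc
    "api_key": "testkey",           # OPENAI_API_KEY from .sgptrc
}

# Before the patch, sgpt ran:
#     additional_kwargs.pop("api_key")
# which dropped the key before the litellm call. With the .pop() removed,
# the key stays in the dict, and litellm can send it as a bearer token
# when using the ollama_chat provider.
```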
