
Using a remote server for ollama #23

Open
qcts33 opened this issue Dec 27, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

qcts33 commented Dec 27, 2023

I'm using a laptop for daily work and I have a GPU server, so it would be nice if I could use chatd as a frontend while running the LLMs on my server with ollama.

By the way, I'm currently using a local reverse proxy to make chatd connect to my server. It works fine, but it's a little complicated.

```sh
caddy reverse-proxy --from :11434 --to server:11434
```
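For reference, here is a rough sketch of what a built-in remote-host option could look like on the client side, so a proxy wouldn't be needed. The `OLLAMA_HOST` setting is an assumption for illustration (chatd has no such option today); `/api/generate` on port 11434 is Ollama's standard REST API.

```ts
// Minimal sketch, not chatd's actual code: read an assumed OLLAMA_HOST setting
// and talk to that Ollama instance over its REST API instead of localhost.
const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://127.0.0.1:11434";

async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_HOST}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false returns a single JSON object instead of a token stream
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Example: generate("llama2", "Hello!").then(console.log);
```

With something like that, pointing chatd at a GPU server would just mean changing the host value, and the local reverse proxy could go away.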
BruceMacD added the enhancement label Dec 27, 2023
ThatOneCalculator commented

Related to this: please make it so that Linux builds don't require copying the Ollama binary into the source folder. chatd should just be able to interact with the Ollama API, running either locally or remotely, as all other Ollama clients do.
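One way that detection could work (a sketch under assumptions, not the current implementation): probe the standard Ollama REST API first, and only fall back to a bundled binary if nothing answers. `/api/tags` is Ollama's endpoint for listing installed models; the default host and the fallback hook below are illustrative.

```ts
// Sketch: check whether an Ollama server (local or remote) is already serving
// the API before trying to start a bundled binary. The fallback itself is
// left as a placeholder since it depends on how chatd packages Ollama.
async function ollamaIsReachable(host: string): Promise<boolean> {
  try {
    // /api/tags lists installed models; any 200 response means the API is up
    const res = await fetch(`${host}/api/tags`, { signal: AbortSignal.timeout(2000) });
    return res.ok;
  } catch {
    return false;
  }
}

async function connectOrSpawn(host = "http://127.0.0.1:11434"): Promise<void> {
  if (await ollamaIsReachable(host)) {
    console.log(`Using existing Ollama API at ${host}`);
  } else {
    console.log("No Ollama API found; falling back to a bundled binary (placeholder)");
    // e.g. spawn a packaged ollama binary here -- omitted in this sketch
  }
}
```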
