
Is there any way to route the plugin to a privateGPT instance hosted locally on the same computer or on a local network? #130

Open
dziliak opened this issue Nov 29, 2023 · 2 comments

Comments

dziliak commented Nov 29, 2023

I love the idea of using this plugin with an offline LLM instead of giving my data to the cloud. Are there any suggestions on where to look in this code, or other resources, to kludge something together to use privateGPT instead of OpenAI?

I didn't see anyone else asking this and hope this is the right spot to ask it.

BradKML commented Jan 26, 2024

Seconded, but for something like Oobabooga (or Ollama, once they get on Windows)... or stronger integration with GPT4All, if its limits can be tolerated. Not sure about LocalChat and Local AI.
The same applies to debanjandhar12/logseq-chatgpt-plugin#32.

@Green-Sky

llama.cpp's server provides a more or less OpenAI-compatible API, so making the API URL configurable might be all that's needed.
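To illustrate the point above, here is a minimal sketch of what "making the API URL configurable" could look like. It assumes a llama.cpp server listening on `http://localhost:8080` (its default port) and uses only the OpenAI-style `/v1/chat/completions` endpoint shape; the model name and base URL are placeholders, not anything from this plugin's actual code.

```python
# Sketch: building an OpenAI-compatible chat request against a local server.
# Assumption: a llama.cpp server is running at http://localhost:8080 and
# exposes the OpenAI-style /v1/chat/completions route.
import json
import urllib.request

# The configurable piece: point this at localhost, a LAN host, or api.openai.com.
LOCAL_BASE_URL = "http://localhost:8080/v1"

def build_chat_request(base_url: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the given base URL."""
    payload = {
        # llama.cpp's server typically accepts any model name here.
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(LOCAL_BASE_URL, "Summarize this note.")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
# Sending it is then just: urllib.request.urlopen(req) once a server is up.
```

Because only the base URL changes, the same request shape would work against OpenAI's hosted API or any other OpenAI-compatible backend (Ollama, LocalAI, etc.).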
