Ollama for Local Models #36

Closed

mateokladaric opened this issue Jun 9, 2024 · 5 comments
@mateokladaric

Wanted to know if it's any hassle to add compatibility with Ollama so the model can be run locally.

Haven't tried, just wondering if anyone has.

@JusticeRage
Owner

Hi! I could definitely look into this. Is this the project you're talking about?

@mateokladaric
Author

mateokladaric commented Jun 9, 2024 via email

@mateokladaric
Author

Most of these projects host their own server on localhost; LM Studio does as well, and there you can pick the port (not sure if you can with Ollama). Either way, they both expose local API endpoints.

Perhaps if I point the OpenAI proxy setting at that localhost endpoint, it could work without any alterations.
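
For reference, Ollama exposes an OpenAI-compatible endpoint on `localhost:11434`, so in principle an existing OpenAI client only needs its base URL overridden. A minimal sketch (not the plugin's actual code; the model name and prompt are placeholders):

```python
# Sketch: reuse an OpenAI-style client against Ollama's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default port; adjust if changed
    api_key="ollama",                      # placeholder; Ollama ignores the key
)

response = client.chat.completions.create(
    model="llama3",  # any model pulled locally, e.g. via `ollama pull llama3`
    messages=[{"role": "user", "content": "Explain what this function does."}],
)
print(response.choices[0].message.content)
```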

JusticeRage added a commit that referenced this issue Sep 17, 2024
Major refactoring to support dynamic construction of the UI menus. This was necessary to support arbitrary model combinations installed via Ollama.
Updated translations.
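
For context, building the menus dynamically implies discovering at runtime which models are installed locally. A hedged sketch, assuming something like Ollama's `/api/tags` endpoint is used for discovery (not necessarily how the commit implements it):

```python
# Sketch: list the models currently installed in a local Ollama instance.
import requests

def list_ollama_models(host="http://localhost:11434"):
    """Return the names of locally installed Ollama models via /api/tags."""
    resp = requests.get(f"{host}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

if __name__ == "__main__":
    for name in list_ollama_models():
        print(name)  # e.g. "llama3:latest", "mistral:7b"
```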
@JusticeRage
Owner

There we are! Please let me know if this works for you!

@mateokladaric
Author

Thank you!!!
