epic: Improving Ollama integration #2998
Comments
Yes please! This is my biggest blocker with Jan. I don't want multiple redundant model file locations; I'd like my Ollama models to be easily used.
I second this. I looked at the docs about "Ollama integration", but all that does is set up the server endpoint. You can't select an Ollama model already downloaded where Ollama stores its models, and I don't think you can upload the model. On my openSUSE Tumbleweed system, Ollama stores its models in /var/lib/ollama/.ollama/models/ rather than the default Ollama location, and the Import file selection dialog can't even see the directories below the /var/lib/ollama directory.
You shouldn't have to import a model, though. If you look at how other tools do it, they provide a list of available models from the API.
That's right, a call to the model listing endpoint and then allowing one to be selected for use is what we're talking about, at least to start. I think 0xSage is a little mixed up about what we're all asking for, maybe? This is holding back anyone that has Ollama running somewhere with modelfiles already configured that they want to use.
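For reference, listing what a running Ollama instance already has installed comes down to a single call to its `/api/tags` endpoint. Here is a minimal sketch, assuming Ollama is reachable at its default `localhost:11434` address; the dropdown wiring is illustrative only:

```typescript
// Minimal sketch: query a locally running Ollama server for its installed
// models via /api/tags and print their names (e.g. to populate a model picker).
// Assumes the default Ollama address; adjust baseUrl if the server runs elsewhere.

interface OllamaModel {
  name: string;        // e.g. "llama3:8b"
  size: number;        // size on disk in bytes
  modified_at: string;
}

interface TagsResponse {
  models: OllamaModel[];
}

async function listOllamaModels(
  baseUrl = "http://localhost:11434"
): Promise<OllamaModel[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama returned ${res.status} ${res.statusText}`);
  }
  const data = (await res.json()) as TagsResponse;
  return data.models;
}

// Example: show whatever Ollama already has, instead of re-importing files.
listOllamaModels()
  .then((models) => models.forEach((m) => console.log(m.name)))
  .catch((err) => console.error("Could not reach Ollama:", err));
```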
Yeah! That’d be fantastic!
Yup, that's the idea.
yes please
" I was able to symlink ollama models to Jan using https://github.com/sammcj/gollama" I use gollama to link to LMStudio. How did you use it to link to Jan? Did you put the Jan directory into the LMStudio files path in Gollama? |
@richardstevenhack I probably could update Gollama to add Jan linking support, but I think it would make more sense for Jan to just support Ollama as an LLM provider; that way you'd get all the nice Ollama API features and wouldn't have to load models in multiple places.
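For anyone wondering what that linking amounts to, here is a rough sketch of the general approach (not gollama's actual code): read an Ollama manifest, find the GGUF weights layer, and symlink the content-addressed blob under a friendly name. The Jan target directory below is a placeholder assumption, and the manifest path assumes a model pulled from the default registry library.

```typescript
// Rough sketch of the symlink approach, under the assumptions stated above.
import { promises as fs } from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

const OLLAMA_MODELS = path.join(os.homedir(), ".ollama", "models"); // default Ollama location
const TARGET_DIR = path.join(os.homedir(), "jan", "models");        // hypothetical target dir

async function linkOllamaModel(name: string, tag = "latest"): Promise<void> {
  // Manifests live under manifests/registry.ollama.ai/library/<name>/<tag>
  const manifestPath = path.join(
    OLLAMA_MODELS, "manifests", "registry.ollama.ai", "library", name, tag
  );
  const manifest = JSON.parse(await fs.readFile(manifestPath, "utf8"));

  // The GGUF weights are the layer with the "image.model" media type.
  const modelLayer = manifest.layers.find(
    (l: { mediaType: string }) => l.mediaType === "application/vnd.ollama.image.model"
  );
  if (!modelLayer) throw new Error(`No model layer found for ${name}:${tag}`);

  // Blobs are stored content-addressed, e.g. blobs/sha256-<digest>.
  const blobPath = path.join(OLLAMA_MODELS, "blobs", modelLayer.digest.replace(":", "-"));
  const linkPath = path.join(TARGET_DIR, `${name}-${tag}.gguf`);

  await fs.mkdir(TARGET_DIR, { recursive: true });
  await fs.symlink(blobPath, linkPath);
  console.log(`Linked ${blobPath} -> ${linkPath}`);
}

linkOllamaModel("llama3").catch(console.error);
```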
I agree. If everyone rallied around Ollama as the main AI server for PCs, and other programs concentrated on the UI and additional features on top, things would be easier. Until then, it would be nice to have the ability to link Jan to Ollama.
Would also love to see Ollama models in a dropdown in Jan via a model provider! Running models on both Ollama and Jan simultaneously can bring most computers to their knees!
Should have included the steps in the original comment:
Feedback:
I followed the above advice and I now see that Jan has added the option when importing to:
Very nice. Just an observation: it worked for me when I selected the
+1
cc @s-celles @sammcj @khromov @mrtysn @ShravanSunder @abdessalaam @richardstevenhack |
Problem
Integrating Ollama with Jan using the single OpenAI endpoint feels challenging. It’s also a hassle to ‘download’ the model.
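For context, the "single OpenAI endpoint" route amounts to pointing an OpenAI-style client at Ollama's compatibility API. A minimal sketch, assuming Ollama is running on its default port and the model tag below has already been pulled (the model name is illustrative):

```typescript
// Sketch of the current workaround: talking to Ollama through its
// OpenAI-compatible /v1/chat/completions endpoint.
async function chatViaOpenAICompat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // must match a model already pulled in Ollama
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  const data = await res.json();
  return data.choices[0].message.content;
}

chatViaOpenAICompat("Hello from Jan").then(console.log).catch(console.error);
```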
Success Criteria
Additional context
Related Reddit comment to be updated: https://www.reddit.com/r/LocalLLaMA/comments/1d8n9wr/comment/l77ifd1/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button