
ollama support? #1001

Open
txhno opened this issue Aug 26, 2024 · 1 comment
txhno commented Aug 26, 2024

Is your feature request related to a problem? Please describe.
I would like to reuse the models I have already downloaded with Ollama.

Describe the solution you'd like
Being able to call `models.ollama(model_name_or_path)`.

Describe alternatives you've considered
llama.cpp works as of now, but Ollama support would make this app a lot more user friendly, since Ollama automates downloads and stores models centrally.

Additional context
None.

@guidance-ai guidance-ai deleted a comment from txhno Aug 26, 2024
Harsha-Nori (Collaborator) commented Aug 26, 2024

@txhno Sorry about that random weird comment...removed your reply too since it had a quote of the link in it, hope that's OK!

On topic -- exploring Ollama support is a really good idea. My understanding is that they just use llama.cpp under the hood and manage GGUF files, right? If we can figure out where the GGUFs are hosted on local file systems, then we can use our llama.cpp infrastructure to make it easy to load ollama models in guidance.
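For anyone who wants to dig in: a minimal sketch of the lookup step, assuming Ollama's on-disk layout of content-addressed blobs (files named `sha256-<hex>`) and JSON manifests whose `layers` list includes an entry with mediaType `application/vnd.ollama.image.model` pointing at the GGUF weights. The helper name and paths here are hypothetical, not an existing guidance or Ollama API:

```python
from pathlib import Path

# MediaType that Ollama uses to tag the GGUF weights layer in a manifest
# (assumption based on Ollama's registry format; verify against your install).
MODEL_MEDIA_TYPE = "application/vnd.ollama.image.model"


def gguf_blob_path(manifest: dict, blobs_dir: Path) -> Path:
    """Resolve the on-disk GGUF blob referenced by an Ollama manifest.

    `manifest` is the parsed JSON of a file under
    ~/.ollama/models/manifests/..., and `blobs_dir` is typically
    ~/.ollama/models/blobs.
    """
    for layer in manifest.get("layers", []):
        if layer.get("mediaType") == MODEL_MEDIA_TYPE:
            # Manifest digests look like "sha256:<hex>", but blob
            # filenames on disk use "sha256-<hex>".
            digest = layer["digest"].replace(":", "-", 1)
            return blobs_dir / digest
    raise FileNotFoundError("manifest has no model (GGUF) layer")
```

Once the blob path is resolved, it could in principle be handed straight to the existing llama.cpp loader (e.g. `models.LlamaCpp(str(path))`), which is exactly the "reuse our llama.cpp infrastructure" idea above.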

We can put this on our backlog to investigate, but if you (or anyone reading this!) have some knowledge about how Ollama works, I'd be happy to tag-team and support a PR here.

@riedgar-ms @nking-1 for awareness
