
Support local Ollama model created from Modelfile #112

Closed
denverdino opened this issue Jan 9, 2024 · 2 comments

Comments

@denverdino
Contributor

As a developer, I can build a local Ollama model from a Modelfile for testing, but it cannot be used by GenAI Stack directly.

Currently, pull_model.Dockerfile invokes the ollama pull command to pull a model from a registry:

        (process/shell {:env {"OLLAMA_HOST" url} :out :inherit :err :inherit} (format "./bin/ollama pull %s" llm))

I think the proper behavior is to pull the model only if it does not exist locally. The change should be simple, e.g.

        (process/shell {:env {"OLLAMA_HOST" url} :out :inherit :err :inherit} (format "bash -c './bin/ollama show %s --modelfile > /dev/null || ./bin/ollama pull %s'" llm llm))
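
For what it's worth, the same check can be expressed in babashka itself by branching on the exit code of ollama show, avoiding the bash -c wrapper (a sketch; assumes url and llm are bound as in the original script):

        (require '[babashka.process :as process])

        ;; Run `ollama show` with :continue true so a non-zero exit (model not
        ;; present locally) doesn't throw, then pull only on failure.
        (let [env {:env {"OLLAMA_HOST" url}}
              show (process/shell (assoc env :continue true :out :string :err :string)
                                  (format "./bin/ollama show %s --modelfile" llm))]
          (when-not (zero? (:exit show))
            (process/shell (assoc env :out :inherit :err :inherit)
                           (format "./bin/ollama pull %s" llm))))
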
@slimslenderslacks
Collaborator

Cool, I think this will work for most people. It wouldn't pull new versions of the model as part of compose up, but maybe that should happen outside of compose up anyway.
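
If refreshing during compose up ever matters, the check could be made opt-out via an environment variable, e.g. (a sketch; PULL_ALWAYS is a hypothetical variable, not part of the current script):

        ;; Hypothetical PULL_ALWAYS switch: force the pull when set, otherwise
        ;; keep the check-then-pull behavior proposed above.
        (if (System/getenv "PULL_ALWAYS")
          (process/shell {:env {"OLLAMA_HOST" url} :out :inherit :err :inherit}
                         (format "./bin/ollama pull %s" llm))
          (process/shell {:env {"OLLAMA_HOST" url} :out :inherit :err :inherit}
                         (format "bash -c './bin/ollama show %s --modelfile > /dev/null || ./bin/ollama pull %s'" llm llm)))
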

@jexp
Collaborator

jexp commented Jan 24, 2024

@mchiang0610 what do you think? There is also a PR for this.
