
Ollama #43

Open
nsosio opened this issue Nov 15, 2023 · 3 comments

Comments

@nsosio (Collaborator) commented Nov 15, 2023

see here

@nsosio added the enhancement label Nov 15, 2023
@nsosio added the low priority label and removed the enhancement label Dec 11, 2023
@nsosio (Collaborator, Author) commented Apr 5, 2024

@Anindyadeep can you check if they support llama2/mistral? Otherwise let's close the issue
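For reference, one way to check this is to ask a running Ollama server which models it has pulled. The sketch below is illustrative, not part of this repo: it assumes a local Ollama instance on the default port 11434 and the `requests` package, and the helper name `list_local_models` is hypothetical. Note that `/api/tags` only lists models already pulled locally, so a model absent here may still be available in the Ollama library.

```python
import requests

# Assumption: a local Ollama server is running on its default port.
OLLAMA_URL = "http://localhost:11434"

def list_local_models() -> list[str]:
    """Return the names of models pulled on the local Ollama instance."""
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    # /api/tags returns {"models": [{"name": "llama2:latest", ...}, ...]}
    return [m["name"] for m in resp.json().get("models", [])]

if __name__ == "__main__":
    models = list_local_models()
    for family in ("llama2", "mistral"):
        supported = any(name.startswith(family) for name in models)
        print(f"{family}: {'available' if supported else 'not pulled'}")
```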

@Anindyadeep (Member) commented

> @Anindyadeep can you check if they support llama2/mistral? Otherwise let's close the issue

Yes, we need to put this at medium priority, since it is an engine.
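If Ollama is added as an engine, a benchmark run could time generations through its REST API. This is a minimal sketch under stated assumptions (local server on port 11434, `requests` installed; the helper `time_generation` is hypothetical): a non-streaming `/api/generate` call reports `eval_count` and `eval_duration` (in nanoseconds), from which decode throughput can be derived.

```python
import requests

# Assumption: a local Ollama server is running on its default port.
OLLAMA_URL = "http://localhost:11434"

def time_generation(model: str, prompt: str) -> float:
    """Run one non-streaming generation and return decode tokens/sec."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    data = resp.json()
    # eval_duration is reported in nanoseconds, so scale to seconds.
    return data["eval_count"] / data["eval_duration"] * 1e9

if __name__ == "__main__":
    print(f"llama2: {time_generation('llama2', 'Hello!'):.1f} tokens/sec")
```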

@Interpause commented

Isn't Ollama practically a wrapper around llama.cpp, though?
