
Question: Why Moondream? #46

Open

richardstevenhack opened this issue Jul 4, 2024 · 2 comments
@richardstevenhack

The README says to use Moondream for local Ollama use: "Install moondream if you want to use the incognito mode."

Moondream is a very small 1.6B-parameter LLM oriented around vision tasks, which, according to its website, has limitations.

See here:
https://medium.com/@scholarly360/moondream2-tiny-visual-language-model-for-document-understanding-df75ab1e0a02
Moondream2: Tiny Visual Language Model For Document Understanding

Why on earth would this model be recommended over one of the more common 7B-or-larger models such as Qwen2 or Llama3? Or, for that matter, the Microsoft Phi models, the smallest of which was used to create Moondream? I've run Phi Mini and Phi Medium on my machine easily.

If you're going to enable running entirely locally using Ollama, you might as well let it use any of the main LLMs Ollama supports.
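This project's actual configuration isn't shown in the thread, but as a sketch of why the suggestion is cheap to implement: Ollama's local HTTP API takes the model name as a plain field in the request body, so swapping Moondream for any other locally pulled model is just a string change. The endpoint URL and model names below are illustrative, not taken from this repo:

```python
import json

# Ollama's default local chat endpoint (illustrative; not from this repo)
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint.

    Any model pulled locally (e.g. "moondream", "llama3", "qwen2",
    "phi3") can be named here; nothing in the request format ties the
    app to Moondream specifically.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

# The same helper works unchanged for any model Ollama serves:
for name in ("moondream", "llama3", "qwen2"):
    payload = build_chat_payload(name, "Describe this page.")
    print(json.dumps(payload))
```

In other words, making the model user-configurable would mostly be a matter of exposing that one string as a setting.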

@nemoche

nemoche commented Dec 31, 2024

Totally agree, good point. I'm also having trouble using my local Ollama with this; Ollama runs well in Flowise now. Any help?

@areibman
Collaborator

> Totally agree, good point. I'm also having trouble using my local Ollama with this; Ollama runs well in Flowise now. Any help?

We might swap it out for Gemini but that wouldn’t be offline ;)

Would having a toggle for more powerful, online models be of interest?

3 participants