The README says to use Moondream for local Ollama use: "Install moondream if you want to use the incognito mode."
Moondream is a very small 1.6B-parameter LLM oriented around vision tasks which, according to its own website, has limitations.
See here:
https://medium.com/@scholarly360/moondream2-tiny-visual-language-model-for-document-understanding-df75ab1e0a02
Moondream2: Tiny Visual Language Model For Document Understanding
Why on earth would this model be recommended over one of the more common 7B-or-larger models such as Qwen2 or Llama 3? Or, for that matter, the Microsoft Phi models, the smallest of which was used to create Moondream? I've run Phi Mini and Phi Medium on my machine easily.
If you're going to enable running entirely locally via Ollama, you might as well let users pick any of the main LLMs Ollama supports.
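To illustrate the point: Ollama's `/api/generate` endpoint takes the model as a plain string, so supporting arbitrary local models is largely a matter of exposing that string as a setting rather than hard-coding `moondream`. This is a minimal sketch (the function name and defaults are mine, not from the project's code) of building such a request payload:

```python
import json

def build_generate_request(model: str, prompt: str, stream: bool = False) -> str:
    """Build a JSON body for Ollama's POST /api/generate endpoint.

    Any model already pulled locally (e.g. "moondream", "qwen2",
    "llama3", "phi3") works here -- swapping models is just a
    different value for the "model" field.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

# Hypothetical usage: the model name could come from a user-facing setting.
body = build_generate_request("llama3", "Summarize this page.")
```

Under this scheme, letting users choose Qwen2, Llama 3, or a Phi model instead of Moondream would require no change beyond a configuration value.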