[bounty] make embedded AI work on windows #682
Comments
💎 $50 bounty • Screenpi.pe
Steps to solve:
Thank you for contributing to mediar-ai/screenpipe!
i think this is going to fix this issue!
i'm able to run this but it's searching for ps: this is awesome SYSTRAN/faster-whisper#535 (comment)
how did you run this ollama? here's the thing:
we have to delete the old cuda libs because tauri crashes if the build has too many files, so we only keep the cuda 12 libs. do you have an nvidia gpu? which one? maybe it only supports cuda 11
by adding this line, then started it from the app ui
yes, i tested on an rtx 2050, which is old, but it's cuda capable
just tested with the full lib folder without deleting anything, and it worked like a charm, there is not
since there is the problem with the build, we can't keep all the ollama lib files; in that case users would have to install the cuda toolkit (i've tested this), which can be rough for normal windows users, and the cuda toolkit also takes too much space. maybe we could use something like this: tauri-apps/tauri#7372 (comment). this way embedded ai will not depend on the cuda toolkit and all the requirements of
hmm yeah maybe something like the ffmpeg-sidecar we use, that downloads ollama at runtime and installs it?
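A minimal sketch of that runtime-download idea, assuming the `reqwest` (with the blocking feature) and `zip` crates and the public ollama windows zip URL; the helper name `download_ollama_if_missing` and the file layout are illustrative, not the project's actual code:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Download and unpack the ollama binary at runtime if it is not already
/// present, similar in spirit to how ffmpeg-sidecar fetches ffmpeg.
fn download_ollama_if_missing(data_dir: &Path) -> Result<PathBuf, Box<dyn std::error::Error>> {
    let exe_path = data_dir.join("ollama.exe");
    if exe_path.exists() {
        // already downloaded on a previous run
        return Ok(exe_path);
    }
    fs::create_dir_all(data_dir)?;

    // official ollama windows archive (assumed URL)
    let url = "https://ollama.com/download/ollama-windows-amd64.zip";
    let bytes = reqwest::blocking::get(url)?.bytes()?;

    // unpack the archive into the app's data dir
    let mut archive = zip::ZipArchive::new(std::io::Cursor::new(bytes))?;
    archive.extract(data_dir)?;

    Ok(exe_path)
}
```

This keeps the installer small and avoids shipping the cuda libs inside the tauri bundle, at the cost of a one-time download on first launch.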
this should work, working on it!!
@tribhuwan-kumar btw the best would be to have it in the screenpipe core code under the llm feature flag. we had done this with the Candle lib but in the end used ollama because there was too much stuff to do. that way CLI users could also have the LLM thing without the app. hope it's clear enough
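A minimal sketch of what that feature-flag gating could look like in screenpipe core, assuming an `llm` feature declared in the crate's Cargo.toml; the module and function names here are hypothetical, not the project's real API:

```rust
// Compiled only when the crate is built with `--features llm`, so both the
// app and the CLI can opt into the embedded AI backend.
#[cfg(feature = "llm")]
pub mod embedded_llm {
    /// Base URL of the locally managed ollama server (default ollama port).
    pub const OLLAMA_URL: &str = "http://localhost:11434";

    /// Whether the embedded LLM backend is compiled in.
    pub fn is_available() -> bool {
        true
    }
}

#[cfg(not(feature = "llm"))]
pub mod embedded_llm {
    /// Stub so callers link against the same symbol without the feature.
    pub fn is_available() -> bool {
        false
    }
}
```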
yes, that'd be better. all kinds of users can use the embedded ai
sorry for the late reply, i had some exams, i'll open a pr soon!
screenpipe/screenpipe-app-tauri/src-tauri/src/llm_sidecar.rs (line 62 at 7922064)
ollama on windows needs OLLAMA_ORIGINS=* because its CORS blocks the Tauri network protocol. for some reason the thing i did does not work (adding the env in the command)
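For reference, a minimal sketch of what passing the variable on the spawned command looks like (the approach reported above as not taking effect); the function name, path handling, and args are assumptions, not the actual code at llm_sidecar.rs line 62:

```rust
use std::process::{Child, Command};

/// Spawn `ollama serve` with the CORS override set on the child process only.
/// Per the report above this did not take effect in the Tauri sidecar setup,
/// so it is shown here for reference, not as the fix.
fn spawn_ollama(ollama_path: &str) -> std::io::Result<Child> {
    Command::new(ollama_path)
        .arg("serve")
        // widen allowed origins so the Tauri webview origin is not
        // rejected by ollama's CORS check
        .env("OLLAMA_ORIGINS", "*")
        .spawn()
}
```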
/bounty 50
definition of done: embedded AI works on windows (e.g. set the env var through setx or idk)
also it would be good in general to make sure OLLAMA_ORIGINS=* is set regardless of whether the user uses embedded AI, but do this in the app code (not the CLI)
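A hedged sketch of that setx route from the app code, assuming a hypothetical helper name; `setx` writes the user-level environment on Windows, so it only affects processes started afterwards and does not change the already-running process:

```rust
/// Persist OLLAMA_ORIGINS=* for the current user via the built-in Windows
/// `setx` command, so any ollama process launched later inherits it.
#[cfg(target_os = "windows")]
fn persist_ollama_origins() -> std::io::Result<()> {
    use std::process::Command;

    let status = Command::new("setx")
        .args(["OLLAMA_ORIGINS", "*"])
        .status()?;

    if !status.success() {
        eprintln!("setx exited with {status}");
    }
    Ok(())
}
```

Because setx is not retroactive, the app would still want to set the variable directly on any ollama process it spawns in the same session.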