main.py not running on M1 Mac due to llama_context_default_params symbol not found #52
Noting that I did force-terminate the process and the terminal window the last time this was working, then immediately opened a new one and started having these issues.
I blew everything away, re-cloned and reinstalled the repo from scratch, and that resolved it! But... the response speed is even slower than before?? Like 10x slower than what I reported here a couple of days ago (which was already slow): #49 (comment)
120 ms -> 1200 ms per token :(
On my M1 MacBook I had very slow generation speeds with llama.cpp until I set the --mlock flag to force it to keep everything in memory. Did you try that?
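If you're loading the model through the Python bindings rather than the llama.cpp binary directly, the rough equivalent would be the `use_mlock` option on the `Llama` constructor. A minimal sketch, assuming that option exists in your installed version and using a placeholder model path:

```python
from llama_cpp import Llama

# Sketch: lock the model weights in RAM, analogous to llama.cpp's --mlock flag.
# The model path below is a placeholder -- point it at your own GGML file.
llm = Llama(
    model_path="./models/7B/ggml-model-q4_0.bin",
    use_mlock=True,  # assumed equivalent of --mlock in the Python bindings
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```

For the server module, the same setting would typically be passed through its configuration (e.g. an environment variable) rather than a CLI flag, so check the README for the exact name.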
Things were working fine until I closed my terminal window and opened a new one and started seeing issues (I don't remember the error). I went ahead and did a quick update via the "Development" steps in the README and started getting this issue when running
python3 -m llama_cpp.server
I've gone in and run make in llama.cpp again, and run the develop script again and again, to no avail. I've deleted the .so file and rebuilt it multiple times, and made sure the MODEL variable is set properly too :/ What am I doing wrong?
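Since the error is about the llama_context_default_params symbol not being found, one way to narrow it down would be to check whether the shared library the bindings load actually exports that symbol. A minimal sketch, assuming the library sits at the path below (adjust it to wherever your build put libllama.so / libllama.dylib):

```python
import ctypes

# Hypothetical diagnostic: the path is an assumption -- point it at the
# shared library your llama_cpp build actually produced.
lib_path = "./llama_cpp/libllama.so"

lib = ctypes.CDLL(lib_path)
try:
    lib.llama_context_default_params  # raises AttributeError if not exported
    print("symbol found: llama_context_default_params")
except AttributeError:
    print("symbol missing -- the library is likely stale or built from an older llama.cpp checkout")
```

If the symbol is missing, the .so you rebuilt probably isn't the one the package is actually importing, or the llama.cpp checkout is out of sync with the bindings.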