Model freezes from time to time, then restarts #1250
Comments
Are you talking about the context swap? Set your context size using
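The flag name appears to be cut off in the comment above. A hedged guess, assuming a llama.cpp build from this era: the prompt-context size is set with `-c` / `--ctx_size` (default 512 tokens). For example, adapting the reporter's own command:

```shell
# Assumption: -c / --ctx_size is the context-size flag in this llama.cpp build.
# A larger context delays the point at which the window fills and is swapped.
./main -m ./models/7B/ggml-model-q4_0.bin --color -f ./prompts/alpaca.txt \
  -ins --n_parts 1 -c 2048 --temp 0.8 --top_k 40 -n 5000 \
  --repeat_penalty 1.3 --threads 4
```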
I am not sure what it is, as the same prompt run afterwards may or may not do the same; it seems spurious, but I will give that a go. If it were the context swap, would that not be consistent for each prompt, especially if the prompt is the same?
PS: cheers for that. I just edited alpaca.sh and that seems to have fixed it. I got the above from a Medium article, so I don't know why it was missing ./main -m ./models/7B/ggml-model-q4_0.bin
I am running llama.cpp on an RK3588 and its speed is actually great, but I am not sure why the text output just freezes.
I cannot seem to work it out: it will happen, then when I try to recreate it with the same prompt it does not happen and the text output is constant.
It could be my fairly beat-up Rock 5B; no criticism intended, just wondering whether it is only me or whether others experience the same.
Running in interactive mode with:

```shell
./main -m ./models/7B/ggml-model-q4_0.bin --color -f ./prompts/alpaca.txt -ins --n_parts 1 --temp 0.8 --top_k 40 -n 5000 --repeat_penalty 1.3 --top_p 0.0 --threads 4
```
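The periodic freeze-and-restart described above matches how llama.cpp handles a full context window: when the window of `n_ctx` tokens fills, half of it is discarded and the kept portion is re-evaluated in one batch before printing resumes, which looks like a stall. A minimal sketch of that cadence, with illustrative variable names (the counts here model the logic, not the real implementation):

```shell
#!/bin/sh
# Sketch of a llama.cpp-style "context swap": when n_past would exceed
# n_ctx, drop half of the non-kept context; the re-evaluation of the
# remaining half is where generation appears to freeze.
n_ctx=512; n_keep=0; n_past=0; swaps=0
step=1; steps=1200; i=0
while [ "$i" -lt "$steps" ]; do
  if [ $((n_past + step)) -gt "$n_ctx" ]; then
    n_left=$((n_past - n_keep))
    n_past=$((n_keep + n_left / 2))   # discard half the window
    swaps=$((swaps + 1))              # re-eval pause happens here
  fi
  n_past=$((n_past + step))
  i=$((i + 1))
done
echo "context swaps: $swaps"
```

With the defaults above (512-token window, 1200 generated tokens), the window fills and swaps three times, so a long interactive session would pause on that cadence; raising `n_ctx` spaces the pauses further apart.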