
Model freezes from time to time then restarts #1250

Closed
StuartIanNaylor opened this issue Apr 30, 2023 · 3 comments

Comments

@StuartIanNaylor

I am running llama.cpp on an RK3588 and its speed is actually great, but I am not sure why the text output just freezes.

I cannot seem to work it out: it will happen once, then when I try to reproduce it with the same prompt it will not happen and the text output is continuous.

It could be my fairly beat-up Rock 5B; no criticism intended, just wondering whether it is just me or others experience the same.

Running in interactive mode with:

```shell
./main -m ./models/7B/ggml-model-q4_0.bin --color -f ./prompts/alpaca.txt -ins --n_parts 1 --temp 0.8 --top_k 40 -n 5000 --repeat_penalty 1.3 --top_p 0.0 --threads 4
```

@Alumniminium

Are you talking about the context swap? Set your context size using -c 128 and see if it freezes more frequently.
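For context, llama.cpp's interactive `main` example handles a full context window with a sliding-window "context swap": it keeps the first chunk of tokens (the initial prompt) and discards roughly half of the rest before continuing, which shows up as a visible pause in the output. Below is a rough Python sketch of that idea; the function name `swap_context` and the numbers are illustrative, not llama.cpp's actual code:

```python
def swap_context(tokens, n_ctx, n_keep):
    """Sketch of a sliding-window context swap: once the context window
    (n_ctx tokens) is full, keep the first n_keep tokens (the prompt),
    drop the oldest half of the generated tokens, and carry on from the
    shortened context. The pause a user sees corresponds to the model
    re-evaluating the kept tokens after this swap."""
    if len(tokens) < n_ctx:
        return tokens  # still room in the window; nothing to do
    n_left = len(tokens) - n_keep
    n_discard = n_left // 2
    # keep the prompt prefix, discard the oldest half of what follows
    return tokens[:n_keep] + tokens[n_keep + n_discard:]

# With a small window (e.g. -c 128) the swap triggers more often,
# which is why a small context size makes the freeze easier to observe.
ctx = list(range(128))            # pretend the 128-token window is full
ctx = swap_context(ctx, n_ctx=128, n_keep=16)
```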

@StuartIanNaylor
Author

I am not sure what it is, as the same prompt run again may or may not do the same; it seems sporadic, but I will give that a go.

If it were the context swap, would that not be consistent for each prompt, especially if the prompt is the same?

@StuartIanNaylor
Author

StuartIanNaylor commented Apr 30, 2023

PS: cheers for that; I just edited alpaca.sh and that seems to have fixed it. I got the earlier command from a Medium article, so I don't know why it was missing the context size flag.

```shell
./main -m ./models/7B/ggml-model-q4_0.bin \
    --color \
    -f ./prompts/alpaca.txt \
    --ctx_size 2048 \
    -n -1 \
    -ins -b 256 \
    --top_k 10000 \
    --temp 0.2 \
    --repeat_penalty 1.1 \
    -t 4
```

```
./examples/alpaca.sh: line 20: 264729 Segmentation fault      ./main -m ./models/7B/ggml-model-q4_0.bin --color -f ./prompts/alpaca.txt --ctx_size 128 -n -1 -ins -b 256 --top_k 10000 --temp 0.2 --repeat_penalty 1.1 -t 4
```
