ggml alpaca model doesn't work #1661
The model file format is too old. The original model is linked from the URL that you gave. It can already be converted with the tools in llama.cpp (convert.py, quantize).
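If it helps, here is a minimal sketch of that conversion, assuming the original weights sit in ./models/7B/ and the default output file name applies; the paths and file names are my assumptions, not taken from this thread:

```sh
# convert the original weights to a ggml f16 file (assumed output: ggml-model-f16.bin)
python3 convert.py ./models/7B/

# quantize the f16 file to 4-bit q4_0 so ./main can load it
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0
```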
Thank you, you're a godsend! I didn't even know the original model was linked from my URL. I downloaded the original model, converted and quantized it successfully, and I'm writing down my steps for other people.
Next: the first time, my command was the one from README.md.
Now it truly runs. One remaining oddity: sometimes the same command fails, but running it a second time works.
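For anyone following along, a rough sketch of the rerun with the freshly quantized model; the model file name and path are my assumptions, and the remaining flags are the same ones from the original alpaca.sh invocation quoted below:

```sh
./main -m ./models/7B/ggml-model-q4_0.bin --color -f ./prompts/alpaca.txt \
    --ctx_size 2048 -n -1 -ins -b 256 --top_k 10000 --temp 0.2 \
    --repeat_penalty 1.1 -t 7
```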
I downloaded the alpaca model into the local "/models/" folder from this Hugging Face URL:
https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml
and ran examples/alpaca.sh:
./main -m ./models/ggml-alpaca-7b-q4.bin \
    --color \
    -f ./prompts/alpaca.txt \
    --ctx_size 2048 \
    -n -1 \
    -ins -b 256 \
    --top_k 10000 \
    --temp 0.2 \
    --repeat_penalty 1.1 \
    -t 7 >> "$log_file"
The program prints this and then exits:
main: build = 607 (ffb06a3)
main: seed = 1685605906
What should I do?