llama_init_from_file: failed to load model #388
Comments

Please use the issue template when opening issues so we can better understand your problem.
(I'm French, so sorry for my bad English.) Hello, I'm on Ubuntu MATE, a Linux distribution, with Python 3.10. I have the same error; I just pasted this into my terminal:

and I got an error. When I run this command I get the "failed to load model" error: make: Nothing to be done for 'default'.
This part is saying that you'll need to find the model files yourself and put them in the models directory.

Hello, I'll explain in French.
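For reference, the directory layout the error message implies is roughly the following (a sketch based on the path in the failing command; the original LLaMA weight filenames shown here are assumptions, and the `.bin` files only appear after running the project's convert and quantize steps):

```
models
└── 7B
    ├── consolidated.00.pth      # original LLaMA weights (obtained separately)
    ├── params.json              # original LLaMA config (obtained separately)
    ├── ggml-model-f16.bin       # produced by the conversion step
    └── ggml-model-q4_0.bin      # produced by the quantization step; ./main -m points here
```

If `ggml-model-q4_0.bin` is missing or empty, `llama_init_from_file` will fail exactly as reported below.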
When I execute this command:
make -j && ./main -m ./models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
An error was reported:
llama_init_from_file: failed to load model
main: error: failed to load model './models/7B/ggml-model-q4_0.bin'
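Since "failed to load model" usually just means the file at the `-m` path is missing, empty, or was never converted, a quick pre-check can save a rebuild. The snippet below is a minimal sketch; `check_model` is a hypothetical helper name, and the model path is the one from the failing command above.

```shell
# check_model: succeed (exit 0) only if the given model file exists
# and has a size greater than zero.
check_model() {
  # POSIX test -s: true if the file exists and is non-empty
  [ -s "$1" ]
}

# Verify the quantized model before invoking ./main.
if ! check_model ./models/7B/ggml-model-q4_0.bin; then
  echo "model file missing or empty; run the convert/quantize steps first" >&2
fi
```

Running this before `./main` makes the failure mode explicit instead of relying on the loader's generic error message.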