Problem when launching privateGPT #411
Comments
The model is not downloaded automatically. By default privateGPT tries to load models/ggml-gpt4all-j-v1.3-groovy.bin, which is missing. To fix this:
You can change the path and model name by editing .env
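For reference, a sketch of what the relevant .env entries look like, modeled on privateGPT's example.env; key names and paths here are assumptions, so adjust them to your own setup:

```
# Hypothetical .env sketch, modeled on privateGPT's example.env; adjust to your setup.
PERSIST_DIRECTORY=db
# Use GPT4All for the default groovy model, or LlamaCpp for a llama.cpp/ggml model such as Vigogne.
MODEL_TYPE=GPT4All
# Point this at a model file that actually exists on disk.
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
```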
@PierreVannier, with the latest langchain and gpt4all, the model file is downloaded automatically, the way Hugging Face does it. It's probably easier to use that way. You can try it at https://github.com/h2oai/h2ogpt; see the updated instructions at https://github.com/h2oai/h2ogpt/blob/main/FAQ.md#CPU. PrivateGPT could adopt the same approach by following the h2oGPT code.
@lanalancia @pseudotensor I want to be able to use a French model; that's why I use llama.cpp and Vigogne.
I have the same error. I can't figure out how to solve it; I'm using the alpaca-lora-7B model from Hugging Face.
Hi, the latest version of llama-cpp-python is 0.1.55. Do you have this version installed? If not, upgrade it. You also need a Vigogne model converted to the latest ggml format (this one, for example).
For me, it is working with that combination.
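If you need the upgrade, it is a plain pip install; a minimal sketch, assuming you run it inside the same virtual environment privateGPT uses (the Vigogne filename in the comment is a placeholder):

```shell
# Upgrade llama-cpp-python to the version mentioned above.
pip install --upgrade llama-cpp-python==0.1.55

# Then point MODEL_PATH in .env at the re-converted ggml Vigogne file, e.g.:
# MODEL_PATH=models/ggml-vigogne-7b-instruct.q4_0.bin   (placeholder filename)
```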
Thanks @Guillaume-Fgt, it works with your workaround!
Hi,
I've done all the necessary steps to set up a llama.cpp (Vigogne) model and have correctly ingested documents (pdf, docx, ppt), but when I launch privateGPT I get this error:
I installed the latest versions of llama.cpp and Vigogne (ggml .bin), running on a MacBook Pro M1.
Any clue?
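One way to narrow this down is to load the ggml file directly with llama-cpp-python, outside privateGPT; if that fails too, the model file or the llama-cpp-python version is the problem rather than privateGPT itself. A minimal sketch, assuming llama-cpp-python 0.1.55 and a placeholder Vigogne model path:

```python
# Sanity check: load the ggml model directly with llama-cpp-python.
# The model path below is a placeholder; point it at your actual Vigogne .bin file.
from llama_cpp import Llama

llm = Llama(model_path="models/ggml-vigogne-7b-instruct.q4_0.bin", n_ctx=1000)
output = llm("### Instruction:\nDis bonjour.\n\n### Response:\n", max_tokens=32)
print(output["choices"][0]["text"])
```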