I successfully compiled LlamaGPTJ-chat and downloaded the 7b and 13b LLaMA models. It was not clear where to place them, but the only 'models' folder was at \gpt4all-backend\llama.cpp\models. I placed them there and am getting the following message, followed by continuously running loading dots.
Your computer supports AVX2
LlamaGPTJ-chat: loading .\models\ggml-vicuna-13b-1.1-q4_2.bin
.........
Is there something else that needs to be done to point the program to the model?
Thanks
You can put the model bin files anywhere on your computer. Just use the -m flag followed by the path to the bin file.
So like: ./chat -m "/Users/kuvaus/mynewfolderformodels/ggml-vicuna-13b-1.1-q4_2.bin"
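Since your paths look like Windows ones, the same idea there would be something like this (the folder name is just an example, put the real path to wherever you saved the bin file):
chat.exe -m "C:\mymodels\ggml-vicuna-13b-1.1-q4_2.bin"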
But without -m, it looks for a models folder in the same directory as the chat executable. So you need to make a new models folder next to the executable. (After you compile the chat, you can move it anywhere, just keep the models folder next to it.) You were right to ask the question: by default it won't find the models in \gpt4all-backend\llama.cpp\models, only in .\models.
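So a layout like this should work without any flags (assuming the bin filename matches the default one the chat looks for):
chat.exe
models\
    ggml-vicuna-13b-1.1-q4_2.bin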
One more thing: it can take a while to load the model, so you'll see the running dots loop until the model is fully ready.