llama.exe doesn't handle relative file paths in Windows correctly #46
Comments
Did you follow the instructions in the README.md to download, convert, and quantize the model? The model is not included in the repo.
I tried everything... I did not see separate instructions for Windows (via CMake) =(
It is telling you it can't find the model at the path you passed.
I don't use PowerShell, and I don't know why ./Release/llama.exe is yellow (I assume that means it exists?)
Well, PowerShell supports forward slashes just fine, but on Windows the path argument to llama.exe is passed verbatim, i.e. it's up to llama.exe to handle parsing the relative file path correctly.
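For illustration, here is a minimal sketch of how a loader could normalize the raw argv path before opening the model. This is not llama.cpp's actual code; it assumes C++17 std::filesystem, which accepts both "/" and "\" separators on Windows and can resolve a relative path to an absolute one before the file is opened.

```cpp
// Hypothetical sketch: normalizing a model path argument on Windows.
// std::filesystem accepts both "/" and "\" separators, so converting the
// raw argv string to a canonical absolute path avoids slash-direction and
// relative-path issues before the file is opened.
#include <cstdio>
#include <filesystem>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <model-path>\n", argv[0]);
        return 1;
    }

    std::filesystem::path model_path{argv[1]};   // e.g. "./models/ggml-model-q4_0.bin"
    std::error_code ec;
    // weakly_canonical resolves the path against the current directory
    // and normalizes separators without requiring the file to exist yet.
    auto resolved = std::filesystem::weakly_canonical(model_path, ec);

    if (ec || !std::filesystem::exists(resolved)) {
        std::fprintf(stderr, "error: model file not found: %s\n",
                     resolved.string().c_str());
        return 1;
    }

    std::printf("loading model from %s\n", resolved.string().c_str());
    return 0;
}
```

In practice the Windows file APIs (and fopen) also accept forward slashes, so the usual cause of this error is simply that the relative path does not point at an existing file from the directory the executable is run in.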
Reopened and corrected the issue title.
Not sure if related, but the ggml-model-q4_0.bin I am getting is only 296 kB. There is no error.
You should check your model file; it's too small. I got this error because I had misspelled model_name...
Check the downloaded files against the checksums in the SHA256 file.
Please include the ggml-model-q4_0.bin model to actually run the code. My pre-signed URL to download the model weights was broken.