server unable to load model #3744
Comments
Have you done these steps?
Just to confirm, can you please give me a reference to obtain the original LLaMA?
For example: Llama 1 7B, Llama 2 7B. Depending on what you want to do with it, a better option may be to download an already converted model: https://huggingface.co/TheBloke?search_models=gguf
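As a rough illustration of that suggestion (a minimal sketch, not from this thread: the repository name, file name, and flags below are assumed examples, and the huggingface-cli download command needs a recent huggingface_hub):

$ # Fetch an already-converted GGUF model from Hugging Face (requires: pip install -U huggingface_hub)
$ huggingface-cli download TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf --local-dir models/7B
$ # Start the example server against the downloaded file
$ ./server -m models/7B/llama-2-7b.Q4_K_M.gguf -c 2048 --host 127.0.0.1 --port 8080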
This issue was closed because it has been inactive for 14 days since being marked as stale.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
The examples/server binary should start.
Current Behavior
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-model-f16.gguf'
{"timestamp":1698069462,"level":"ERROR","function":"load_model","line":558,"message":"unable to load model","model":"models/7B/ggml-model-f16.gguf"}
Loaded 'C:\Windows\SysWOW64\kernel.appcore.dll'.
Loaded 'C:\Windows\SysWOW64\msvcrt.dll'.
The program '[6600] server.exe' has exited with code 1 (0x1).
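For context, the file at models/7B/ggml-model-f16.gguf is normally produced from the original LLaMA weights by the repository's convert.py script before the server is run. A minimal sketch of that sequence, assuming the original weights (tokenizer.model plus the *.pth shards) sit under models/7B/ and that this build's convert.py accepts --outtype/--outfile:

$ # Convert the original weights to an f16 GGUF file
$ python convert.py models/7B/ --outtype f16 --outfile models/7B/ggml-model-f16.gguf
$ # Launch the example server against the converted model
$ ./server -m models/7B/ggml-model-f16.gguf -c 2048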
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
Physical (or virtual) hardware you are using, e.g. for Linux:
physical
$ lscpu
Operating System, e.g. for Linux:
Windows
$ uname -a
SDK version, e.g. for Linux:
Failure Information (for bugs)
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-model-f16.gguf'
{"timestamp":1698069462,"level":"ERROR","function":"load_model","line":558,"message":"unable to load model","model":"models/7B/ggml-model-f16.gguf"}
Loaded 'C:\Windows\SysWOW64\kernel.appcore.dll'.
Loaded 'C:\Windows\SysWOW64\msvcrt.dll'.
The program '[6600] server.exe' has exited with code 1 (0x1).
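One quick diagnostic worth noting here (an editorial sketch, not from the thread): the server only loads GGUF files, so if the file at models/7B/ggml-model-f16.gguf is missing, truncated, or still in an older GGML-era format, llama_load_model_from_file fails exactly like this. A valid file starts with the four ASCII bytes GGUF, which can be checked from a Git Bash or similar shell:

$ # Print the first four bytes of the model file; a valid GGUF file begins with the magic "GGUF"
$ head -c 4 models/7B/ggml-model-f16.gguf
GGUF
$ # Any other output (or an error that the file does not exist) means the file must be (re)converted or re-downloaded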