
server unable to load model #3744

Closed
jonty-esterhuizen opened this issue Oct 23, 2023 · 4 comments

@jonty-esterhuizen

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

The examples/server binary should start and load the model.

Current Behavior

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-model-f16.gguf'
{"timestamp":1698069462,"level":"ERROR","function":"load_model","line":558,"message":"unable to load model","model":"models/7B/ggml-model-f16.gguf"}
Loaded 'C:\Windows\SysWOW64\kernel.appcore.dll'.
Loaded 'C:\Windows\SysWOW64\msvcrt.dll'.
The program '[6600] server.exe' has exited with code 1 (0x1).
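The log only says "failed to load model", which covers both a missing file and a file in the wrong format. A quick way to tell the two apart is to check that the file exists and starts with the 4-byte GGUF magic (`b"GGUF"`). The sketch below is a diagnostic assumption, not part of llama.cpp; the path is taken from the error log above.

```python
import os

def check_gguf(path: str) -> str:
    """Distinguish 'file not found' from 'wrong/corrupt format' for a GGUF model."""
    if not os.path.exists(path):
        return "file not found"
    with open(path, "rb") as f:
        magic = f.read(4)
    # Every GGUF file begins with the ASCII magic "GGUF".
    return "looks like GGUF" if magic == b"GGUF" else "not a GGUF file"

print(check_gguf("models/7B/ggml-model-f16.gguf"))
```

If this prints "file not found", the server was pointed at a path where no converted model exists, which matches the error in this issue.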

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • Physical (or virtual) hardware: physical (CPU and hardware details were attached as screenshots)
  • Operating System: Windows
  • SDK versions:
    Python 3.10.11
    make 3.28

Failure Information (for bugs)

(Same log as under Current Behavior above.)


shibe2 commented Oct 23, 2023

Have you done these steps?

  • obtain the original LLaMA model weights and place them in ./models
  • convert the 7B model to ggml FP16 format
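The two steps above can be sanity-checked from the repo root before starting the server. This is a sketch under assumptions: the model path is the one from the error log, and the convert command shown in the comment is the one documented in the llama.cpp README at the time (`convert.py` with `--outtype f16`).

```shell
# Check whether the converted model the server expects is actually there.
MODEL=models/7B/ggml-model-f16.gguf
if [ -f "$MODEL" ]; then
    echo "model found: $MODEL"
else
    echo "model missing: $MODEL"
    echo "place the original weights in ./models and convert them first, e.g.:"
    echo "  python3 convert.py models/7B/ --outtype f16"
fi
```

If the file is missing, the server exits with exactly the "unable to load model" error shown in this issue.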

jonty-esterhuizen (Author) commented:

Just to confirm: can you please give me a reference for obtaining the original LLaMA weights?


shibe2 commented Oct 23, 2023

For example: Llama 1 7B, Llama 2 7B.

Depending on what you want to do with it, a better option may be to download an already-converted model: https://huggingface.co/TheBloke?search_models=gguf

@cebtenzzre changed the title from "[User] Insert summary of your issue or enhancement.." to "server unable to load model" on Oct 23, 2023
@github-actions github-actions bot added the stale label Mar 19, 2024
github-actions bot commented Apr 2, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 2, 2024