Illegal instruction (core dumped) when trying to load model #839
Comments
Your CPU is very old and doesn't support certain instructions like AVX2 (you can see that it is missing in the list of "flags" reported by lscpu).
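A minimal sketch of that flag check (the helper name has_avx2 is hypothetical, not part of llama-cpp-python; on a Linux host the live flag list comes from /proc/cpuinfo):

```shell
# Hypothetical helper: report whether a CPU-flags string contains AVX2.
has_avx2() {
    # -w matches "avx2" as a whole word, so plain "avx" does not count
    printf '%s\n' "$1" | grep -qw avx2
}

# On a real host you would feed it the live flag list, e.g.:
#   has_avx2 "$(grep -m1 '^flags' /proc/cpuinfo)" && echo "AVX2 present"
```

If the flag is absent, a llama.cpp build compiled with AVX2 enabled will execute an instruction the CPU does not implement, which the kernel reports as an invalid opcode (SIGILL).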
Same issue here. After doing some digging, it turns out that CMAKE_ARGS are not passed to the pip install command. Still trying to figure out why.
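A sketch of the usual workaround for this class of crash, assuming the -DLLAMA_* CMake option names that llama.cpp used at the time of this issue (newer releases renamed them with a GGML_ prefix, so verify against your version):

```shell
# Disable the instruction-set paths the CPU lacks before rebuilding.
# The -DLLAMA_* names are an assumption based on llama.cpp's CMake
# options of that era - check your checked-out version before using them.
CMAKE_ARGS="-DLLAMA_AVX2=off -DLLAMA_AVX=off -DLLAMA_FMA=off -DLLAMA_F16C=off"
export CMAKE_ARGS

# Setting the variable inline on the command itself guarantees it reaches
# pip's build step even when a plain export was getting lost:
#   CMAKE_ARGS="$CMAKE_ARGS" pip install --force-reinstall --no-cache-dir llama-cpp-python
```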
@fgeo23 have you found out the reason and a solution?
Hello everyone.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
To load the model.
Current Behavior
When I try to load the model with
llm = Llama(model_path="./llama.cpp/models/llama-2-7b-chat.Q5_K_M.gguf")
it responds with: Illegal instruction (core dumped)
This is from my syslog:
kernel: [1728595.660950] traps: python3[213941] trap invalid opcode ip:7f4aa44a4e94 sp:7ffceec92e60 error:0 in libllama.so[7f4aa448a000+9f000]
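That trap line is the kernel's view of the SIGILL that killed the process. A quick way to confirm it from the shell (a sketch: the exit-status arithmetic is standard POSIX, 128 plus the signal number, and SIGILL is signal 4 on Linux):

```shell
# Reproduce the failing load in a subshell and inspect the exit status.
# 132 = 128 + 4 (SIGILL), matching the "invalid opcode" trap above.
python3 -c 'from llama_cpp import Llama; Llama(model_path="./llama.cpp/models/llama-2-7b-chat.Q5_K_M.gguf")'
echo "exit status: $?"
```

An exit status of 132 here tells you the crash is a CPU-level illegal instruction rather than an ordinary Python exception, which points at the build flags rather than at the model file.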
Environment and Context
$ lscpu
It is a virtual machine running Ubuntu 22.04.
$ uname -a
Linux trying-to-train-llama2 5.15.0-46-generic #49-Ubuntu SMP Thu Aug 4 18:03:25 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux