Found a similar issue, but I haven't set any CXX env variable while installing llama-cpp-python. I think it's a gcc version issue (the deployment system is on gcc v10, but it works on my PC, which has gcc v11).
Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import llama_cpp
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/llama_cpp.py", line 75, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)  # type: ignore
  File "/usr/lib/python3.9/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/local/lib/python3.9/dist-packages/llama_cpp/lib/libllama.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/llama_cpp.py", line 88, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/llama_cpp.py", line 77, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/usr/local/lib/python3.9/dist-packages/llama_cpp/lib/libllama.so': /usr/local/lib/python3.9/dist-packages/llama_cpp/lib/libllama.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv
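For reference, the missing symbol is a libstdc++ internal, which is consistent with a libstdc++/gcc version mismatch between where the wheel was built and where it runs. A quick check, assuming binutils' c++filt is installed:

```shell
# Demangle the symbol from the OSError above; it resolves to a libstdc++
# exception_ptr method, so the loaded libstdc++ is too old to provide it
c++filt _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv
# std::__exception_ptr::exception_ptr::_M_addref()
```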
Was able to fix this error for version 0.2.78. I was trying to run llama.cpp on a VM. A similar issue was faced by a few users when they were trying to build llama.cpp (without the bindings) from source. While installing llama-cpp-python, I had to disable llama native.
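Disabling the native build is done by passing CMake flags through the CMAKE_ARGS environment variable at install time. A sketch, assuming the LLAMA_NATIVE option used by llama.cpp builds of that era (newer llama.cpp releases renamed this option to GGML_NATIVE):

```shell
# Rebuild the bindings from source with native CPU optimizations disabled
# (--force-reinstall and --no-cache-dir ensure a fresh build instead of a cached wheel)
CMAKE_ARGS="-DLLAMA_NATIVE=OFF" pip install --force-reinstall --no-cache-dir llama-cpp-python
```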
OS: Debian
llama-cpp-python Version: 0.2.87
python version: 3.9.2
gcc/g++ version: 10.2.1
Here is the log when I try to import llama_cpp:
I am trying to run on a CPU-only device (and a low-end one at that), and this is the installation command which I used:
Also, it would be helpful to know whether I am installing the package correctly for a CPU-only device (as I need to avoid excess use of disk space).
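For what it's worth, a default source build of llama-cpp-python enables no GPU backend, so a plain install is already CPU-only; GPU support only comes in when backend flags are passed explicitly via CMAKE_ARGS. A sketch:

```shell
# Without extra CMAKE_ARGS, the bundled llama.cpp is compiled for CPU only
pip install llama-cpp-python
```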