
Failed to load shared library error: undefined symbol _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv #1688

Closed
@kausthub-kannan

Description


Found a similar issue, but I haven't set any CXX environment variable while installing llama-cpp-python. I suspect it is a GCC version issue: the deployment system is on GCC 10, but the same install works on my PC, which has GCC 11.

OS: Debian
llama-cpp-python version: 0.2.87
Python version: 3.9.2
GCC/G++ version: 10.2.1

Here is the log when I try to import llama_cpp:

Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import llama_cpp
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/llama_cpp.py", line 75, in _load_shared_library
    return ctypes.CDLL(str(_lib_path), **cdll_args)  # type: ignore
  File "/usr/lib/python3.9/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/local/lib/python3.9/dist-packages/llama_cpp/lib/libllama.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/llama_cpp.py", line 88, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/usr/local/lib/python3.9/dist-packages/llama_cpp/llama_cpp.py", line 77, in _load_shared_library
    raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")
RuntimeError: Failed to load shared library '/usr/local/lib/python3.9/dist-packages/llama_cpp/lib/libllama.so': /usr/local/lib/python3.9/dist-packages/llama_cpp/lib/libllama.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv
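For reference, the missing symbol demangles to `std::__exception_ptr::exception_ptr::_M_addref()`, which is a libstdc++ internal. A quick diagnostic sketch (not a fix) is to ask the system's own libstdc++ whether it exports that symbol; `find_library` here just searches the default linker path, so the result may differ from whatever libllama.so actually resolves at load time:

```python
import ctypes
import ctypes.util

# The mangled name from the error above; it demangles to
# std::__exception_ptr::exception_ptr::_M_addref(), a libstdc++ internal.
SYMBOL = "_ZNSt15__exception_ptr13exception_ptr9_M_addrefEv"

libname = ctypes.util.find_library("stdc++")  # e.g. "libstdc++.so.6"
if libname is None:
    print("no libstdc++ found on the default linker search path")
else:
    lib = ctypes.CDLL(libname)
    # Attribute access on a CDLL performs a dlsym() lookup, so an
    # AttributeError means this libstdc++ does not export the symbol.
    try:
        getattr(lib, SYMBOL)
        print(f"{libname} exports {SYMBOL}")
    except AttributeError:
        print(f"{libname} does NOT export {SYMBOL}")
```

If the system libstdc++ does not export the symbol while the wheel's libllama.so requires it, that would be consistent with the wheel having been built against a newer GCC than the one on the deployment machine.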

I am trying to run on a CPU-only (and fairly low-end) device, and this is the installation command I used:

pip install llama-cpp-python==0.2.87 --force-reinstall --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
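One workaround I would expect to apply here (an assumption on my part, not a confirmed fix) is to skip the prebuilt CPU wheel entirely and force a source build, so that libllama.so is compiled and linked against this machine's own GCC 10 toolchain rather than the newer libstdc++ the wheel appears to have been built with:

```shell
# Hypothetical workaround: force pip to build llama-cpp-python from
# source (requires cmake and g++ on the machine), so the resulting
# libllama.so only references symbols the local libstdc++ provides.
pip install llama-cpp-python==0.2.87 --force-reinstall --no-cache-dir \
    --no-binary llama-cpp-python
```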

It would also be helpful to know whether this is the right way to install the package for a CPU-only device, as I need to avoid using excess disk space.
