
Core dumped on trying to import from llama_cpp module when built with CUBLAS=on #412

Closed
@m-from-space

Description

1. I installed llama-cpp-python with CUBLAS support on my system, and the build completed successfully, using the following command:

CUDACXX=/usr/local/cuda/bin/nvcc CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=native" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

2. When I try to start using it, a severe crash happens on importing the module (a diagnostic sketch follows this list):
$ python
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from llama_cpp import Llama
Illegal instruction (core dumped)
3. This also affects text-generation-webui with CUBLAS on, so I cannot load any llama.cpp model with it.
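An "Illegal instruction" crash at import time on Linux usually means the compiled native library uses SIMD instructions the host CPU does not support (llama.cpp enables AVX/AVX2/FMA/F16C by default on x86). Below is a minimal diagnostic sketch; the set of flags it checks is an assumption based on llama.cpp's common build options, not an exhaustive list.

# Minimal sketch (Linux only): compare the CPU's reported feature flags
# against the instruction sets llama.cpp typically compiles for. A
# missing flag is a likely culprit for "Illegal instruction" on import.
with open("/proc/cpuinfo") as f:
    flags = next((line.split() for line in f if line.startswith("flags")), [])

for isa in ("avx", "avx2", "avx512f", "fma", "f16c"):
    print(f"{isa}: {'yes' if isa in flags else 'MISSING'}")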

System: Ubuntu 20.04, RTX 3060 12 GB, 64 GB RAM, CUDA 12.1.105
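If a flag turns out to be missing, the usual workaround is to rebuild with the corresponding llama.cpp option disabled, e.g. adding -DLLAMA_AVX2=off (and similarly LLAMA_FMA, LLAMA_F16C) to CMAKE_ARGS; treat the exact option names as assumptions to verify against the llama.cpp revision being built. Once the module imports cleanly, llama.cpp's own system-info call can confirm which instruction sets ended up compiled in; the import path below is an assumption about llama-cpp-python's low-level bindings.

# Hedged sketch: after a rebuild that imports without crashing, print
# the instruction sets the native library was compiled with. The
# llama_print_system_info binding mirrors llama.cpp's C function and
# returns bytes; the exact import path is an assumption.
from llama_cpp import llama_cpp

print(llama_cpp.llama_print_system_info().decode())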
