Problems installing on Scientific Linux 7.3 cluster #1630

Closed
bozo32 opened this issue May 28, 2023 · 1 comment

bozo32 commented May 28, 2023

Scientific Linux (Red Hat Enterprise) 7.3
cc 4.8.5 (the system default)
Installed locally:
gcc 10.3
glibc.i686 2.17-322.el7_9 @sl-security
glibc.x86_64 2.17-322.el7_9 @sl-security

Expected Behavior

I was trying to install Oobabooga and it choked on llama.cpp. I then tried to install llama.cpp directly; in both cases I got the same error. The effect is that all GGML models do nothing.

Current Behavior

Building wheels for collected packages: llama-cpp-python, transformers, accelerate
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [99 lines of output]

Environment and Context

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
Stepping: 4
CPU MHz: 1240.594
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 22528K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities

python 3.10.9
gcc (GCC) 10.3.0 (installed locally because I don't have root; cc is still the system 4.8, but the local gcc is on PATH via ~/.bashrc)
g++ (GCC) 10.3.0
GNU Make 4.3
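
For reference, exposing a user-local GCC like this usually comes down to a couple of ~/.bashrc lines of the following shape (the install prefix here is an assumption; the report does not give the actual path). Note that this puts gcc and g++ ahead of the system ones but does not change what /usr/bin/cc resolves to:

  # hypothetical ~/.bashrc entries for a user-local GCC 10.3
  export PATH="$HOME/opt/gcc-10.3/bin:$PATH"
  export LD_LIBRARY_PATH="$HOME/opt/gcc-10.3/lib64:$LD_LIBRARY_PATH"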

pip list | egrep "torch|numpy|sentencepiece"
numpy 1.24.3
sentencepiece 0.1.99
torch 2.0.1
torchaudio 2.0.2+rocm5.4.2
torchvision 0.15.2+rocm5.4.2

Steps to Reproduce

Try installing Oobabooga on a Scientific Linux cluster.
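
As noted above, the failure also reproduces without Oobabooga by building the wheel directly (run inside the active conda env):

  pip install llama-cpp-python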

Failure Logs

Building wheels for collected packages: llama-cpp-python, transformers, accelerate
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [99 lines of output]

  --------------------------------------------------------------------------------
  -- Trying 'Ninja' generator
  --------------------------------
  ---------------------------
  ----------------------
  -----------------
  ------------
  -------
  --
  Not searching for unused variables given on the command line.
  -- The C compiler identification is GNU 4.8.5
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- The CXX compiler identification is GNU 4.8.5
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (0.5s)
  -- Generating done (nans)
  -- Build files have been written to: /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/_cmake_test_compile/build
  --
  -------
  ------------
  -----------------
  ----------------------
  ---------------------------
  --------------------------------
  -- Trying 'Ninja' generator - success
  --------------------------------------------------------------------------------

  Configuring Project
    Working directory:
      /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/_skbuild/linux-x86_64-3.10/cmake-build
    Command:
      /tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91 -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/_skbuild/linux-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.9 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/home/WUR/tamas002/miniconda3/envs/textgen/bin/python -DPYTHON_INCLUDE_DIR:PATH=/home/WUR/tamas002/miniconda3/envs/textgen/include/python3.10 -DPYTHON_LIBRARY:PATH=/home/WUR/tamas002/miniconda3/envs/textgen/lib/libpython3.10.so -DPython_EXECUTABLE:PATH=/home/WUR/tamas002/miniconda3/envs/textgen/bin/python -DPython_ROOT_DIR:PATH=/home/WUR/tamas002/miniconda3/envs/textgen -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/home/WUR/tamas002/miniconda3/envs/textgen/include/python3.10 -DPython3_EXECUTABLE:PATH=/home/WUR/tamas002/miniconda3/envs/textgen/bin/python -DPython3_ROOT_DIR:PATH=/home/WUR/tamas002/miniconda3/envs/textgen -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/home/WUR/tamas002/miniconda3/envs/textgen/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja -DCMAKE_BUILD_TYPE:STRING=Release

  Not searching for unused variables given on the command line.
  -- The C compiler identification is GNU 4.8.5
  -- The CXX compiler identification is GNU 4.8.5
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (0.4s)
  -- Generating done (nans)
  -- Build files have been written to: /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/_skbuild/linux-x86_64-3.10/cmake-build
  [1/2] Generating /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/vendor/llama.cpp/libllama.so
  FAILED: /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/vendor/llama.cpp/libllama.so
  cd /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/vendor/llama.cpp && make libllama.so
  I llama.cpp build info:
  I UNAME_S:  Linux
  I UNAME_P:  x86_64
  I UNAME_M:  x86_64
  I CFLAGS:   -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native
  I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
  I LDFLAGS:
  I CC:       cc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
  I CXX:      g++ (GCC) 10.3.0

  g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -c llama.cpp -o llama.o
  cc  -I.              -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native   -c ggml.c -o ggml.o
  ggml.c:76:23: fatal error: stdatomic.h: No such file or directory
   #include <stdatomic.h>
                         ^
  compilation terminated.
  make: *** [Makefile:190: ggml.o] Error 1
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "/tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 674, in setup
      cmkr.make(make_args, install_target=cmake_install_target, env=env)
    File "/tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 697, in make
      self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
    File "/tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 742, in make_impl
      raise SKBuildError(msg)

  An error occurred while building with CMake.
    Command:
      /tmp/pip-build-env-vyzkpzhz/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake --build . --target install --config Release --
    Install target:
      install
    Source directory:
      /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91
    Working directory:
      /tmp/pip-install-di22ne4w/llama-cpp-python_0f964bef963448ddaaa50d08299dbb91/_skbuild/linux-x86_64-3.10/cmake-build
  Please check the install target is valid and see CMake's output for more information.

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Building wheel for transformers (pyproject.toml) ... done
Created wheel for transformers: filename=transformers-4.30.0.dev0-py3-none-any.whl size=7102790 sha256=f6b26779a389b47ec40289215a55d0f779555228f6d976370455d30e309abc4d
Stored in directory: /home/WUR/tamas002/.cache/pip/wheels/31/a7/0d/74848287c66e147f41c6ae09e116236d00775d52a84c862021
Building wheel for accelerate (pyproject.toml) ... done
Created wheel for accelerate: filename=accelerate-0.20.0.dev0-py3-none-any.whl size=225769 sha256=de28660322dc023156ffd31c52bbe2f6122346ea213360d581c379675726c341
Stored in directory: /home/WUR/tamas002/.cache/pip/wheels/1e/61/eb/5c95b515e16875f1653f1b8bc1fb7c9b35c6eee91cd97b829a
Successfully built transformers accelerate
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
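
The key clue in the log above is the compiler split: CC is the system cc (GCC 4.8.5) while CXX is the local g++ 10.3.0. GCC only shipped <stdatomic.h> from 4.9 onward, so compiling ggml.c with -std=c11 under cc 4.8.5 fails exactly as shown. A quick way to confirm this on the cluster (a sketch, not from the original report):

  which cc gcc    # expect /usr/bin/cc vs. the user-local gcc
  cc --version    # GCC 4.8.5 (system default)
  gcc --version   # GCC 10.3.0 (local install)
  # <stdatomic.h> first shipped with GCC 4.9, so this fails under cc 4.8.5:
  echo '#include <stdatomic.h>' | cc -std=c11 -xc -fsyntax-only -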


bozo32 commented May 31, 2023

Solved it.
I had to install gcc and its libraries locally, then add a mapping so that cc points to the locally installed copy of gcc. Now it works... but wow, it is slow on the cluster, where it has no right to be.
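
One way to do that mapping without root is a user-local symlink directory ahead of /usr/bin on PATH (a sketch; exact paths depend on where the local GCC lives):

  # hypothetical: make cc and c++ resolve to the user-local GCC 10.3
  mkdir -p ~/bin
  ln -sf "$(command -v gcc)" ~/bin/cc
  ln -sf "$(command -v g++)" ~/bin/c++
  export PATH="$HOME/bin:$PATH"

  # rebuild the wheel with the new toolchain; CC/CXX make the choice explicit
  CC=gcc CXX=g++ pip install --force-reinstall --no-cache-dir llama-cpp-python

With that in place, the CMake probe picks up GCC 10.3 for both C and C++ and the stdatomic.h include resolves.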
