
[User] Insert summary of your issue or enhancement.. #1471

Closed
adityachallapally opened this issue May 15, 2023 · 3 comments
@adityachallapally

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

I expected the build to succeed after following the instructions, but it keeps failing.

Current Behavior

The linker keeps reporting undefined references to ggml CUDA functions:

```
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: unknown
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -std=c11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native
I CXXFLAGS: -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
I LDFLAGS:
I CC: cc (GCC) 13.1.1 20230429
I CXX: g++ (GCC) 13.1.1 20230429

g++ -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native examples/main/main.cpp ggml.o llama.o common.o -o main
/usr/bin/ld: llama.o: in function `llama_free':
llama.cpp:(.text+0x28e1): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0x2924): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0x2aac): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0x2adb): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.o: in function `llama_model_load_internal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, llama_context&, int, int, ggml_type, bool, bool, bool, void (*)(float, void*), void*)':
llama.cpp:(.text+0x9020): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0x9033): undefined reference to `ggml_cuda_host_malloc'
/usr/bin/ld: llama.cpp:(.text+0xa6ff): undefined reference to `ggml_cuda_transform_tensor'
/usr/bin/ld: llama.cpp:(.text+0xa714): undefined reference to `ggml_cuda_transform_tensor'
/usr/bin/ld: llama.cpp:(.text+0xa729): undefined reference to `ggml_cuda_transform_tensor'
/usr/bin/ld: llama.cpp:(.text+0xa73e): undefined reference to `ggml_cuda_transform_tensor'
/usr/bin/ld: llama.cpp:(.text+0xa756): undefined reference to `ggml_cuda_transform_tensor'
/usr/bin/ld: llama.o:llama.cpp:(.text+0xa76b): more undefined references to `ggml_cuda_transform_tensor' follow
/usr/bin/ld: llama.o: in function `llama_init_from_file':
llama.cpp:(.text+0xb968): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0xb97b): undefined reference to `ggml_cuda_host_malloc'
/usr/bin/ld: llama.cpp:(.text+0xbb96): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0xbba9): undefined reference to `ggml_cuda_host_malloc'
/usr/bin/ld: llama.cpp:(.text+0xbc4f): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0xbc62): undefined reference to `ggml_cuda_host_malloc'
/usr/bin/ld: llama.cpp:(.text+0xbd0b): undefined reference to `ggml_cuda_host_free'
/usr/bin/ld: llama.cpp:(.text+0xbd1e): undefined reference to `ggml_cuda_host_malloc'
/usr/bin/ld: ggml.o: in function `ggml_compute_forward_mul_mat_q_f32':
ggml.c:(.text+0x2817): undefined reference to `ggml_cuda_can_mul_mat'
/usr/bin/ld: ggml.o: in function `ggml_compute_forward_mul_mat_f16_f32':
ggml.c:(.text+0x602d): undefined reference to `ggml_cuda_can_mul_mat'
/usr/bin/ld: ggml.o: in function `ggml_init':
ggml.c:(.text+0x12068): undefined reference to `ggml_init_cublas'
/usr/bin/ld: ggml.o: in function `ggml_compute_forward':
ggml.c:(.text+0x1452b): undefined reference to `ggml_cuda_can_mul_mat'
/usr/bin/ld: ggml.o: in function `ggml_graph_compute':
ggml.c:(.text+0x1f317): undefined reference to `ggml_cuda_can_mul_mat'
/usr/bin/ld: ggml.c:(.text+0x1f947): undefined reference to `ggml_cuda_mul_mat_get_wsize'
/usr/bin/ld: ggml.o: in function `ggml_compute_forward_mul_mat_q_f32':
ggml.c:(.text+0x2ba5): undefined reference to `ggml_cuda_mul_mat'
/usr/bin/ld: ggml.o: in function `ggml_compute_forward_mul_mat_f16_f32':
ggml.c:(.text+0x6469): undefined reference to `ggml_cuda_mul_mat'
/usr/bin/ld: ggml.o: in function `ggml_compute_forward':
ggml.c:(.text+0x16564): undefined reference to `ggml_cuda_mul_mat'
```

Environment and Context

Linux with an 8 GB GPU.

@SlyEcho
Collaborator

SlyEcho commented May 15, 2023

That means you compiled `ggml.o` with `LLAMA_CUBLAS=1` and `main` without it.

Please run `make clean` before building different configurations, or use CMake.
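A minimal sketch of that workflow, assuming the Makefile and CMake options as they existed in llama.cpp around May 2023 (`LLAMA_CUBLAS` is taken from the comment above; verify the current flag names against the project README):

```
# Start from a clean tree so every object file is rebuilt with the same flags.
make clean

# Build all objects and the final binary with cuBLAS enabled.
LLAMA_CUBLAS=1 make

# Alternatively, use CMake: out-of-tree builds keep configurations separate,
# so mixed-flag object files cannot occur.
mkdir build && cd build
cmake -DLLAMA_CUBLAS=ON ..
cmake --build . --config Release
```

The key point is that `ggml.c` guards its CUDA calls behind a compile-time macro, so every translation unit and the link step must see the same setting; a stale object file from a previous configuration is enough to produce the undefined references above.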

@github-actions github-actions bot added the stale label Mar 25, 2024
@BlessMario

`make clean` was the remedy... thanks

@github-actions github-actions bot removed the stale label Mar 31, 2024
@github-actions github-actions bot added the stale label Apr 30, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.
