When I clone the project onto my CentOS machine, enter the llama.cpp directory, and run the "make" command, it runs, but the build does not seem to have succeeded.
Expected Behavior
This is the output after running "make":
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx -mavx2 -mfma -mf16c -msse3 -mavx512f -mavx512bw -mavx512dq -mavx512vl -mavx512cd
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:
I CC: cc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
I CXX: g++ (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
cc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx -mavx2 -mfma -mf16c -msse3 -mavx512f -mavx512bw -mavx512dq -mavx512vl -mavx512cd -c ggml.c -o ggml.o
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c llama.cpp -o llama.o
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c examples/common.cpp -o common.o
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread examples/main/main.cpp ggml.o llama.o common.o -o main
==== Run ./main -h for help. ====
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread examples/quantize/quantize.cpp ggml.o llama.o -o quantize
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread examples/perplexity/perplexity.cpp ggml.o llama.o common.o -o perplexity
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread examples/embedding/embedding.cpp ggml.o llama.o common.o -o embedding
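Note: the log above ends without errors, so the compilation step itself appears to have completed. A quick sanity check, assuming the Makefile writes the binaries to the repository root as shown above, would be:

ls -l main quantize perplexity embedding
./main -h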
After this I run ls ./models, and it only shows ggml-vocab.bin.
I can't find the 65B 30B 13B 7B directories or the tokenizer_checklist.chk and tokenizer.model files.
What can I do?
I would appreciate it very much if you could help me solve this problem.
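For context, the 65B/30B/13B/7B directories and the tokenizer files are not produced by make and are not shipped with the repository (only ggml-vocab.bin is included); they are the original LLaMA weights, which have to be obtained separately and placed under ./models. A rough sketch of the README's workflow at the time, assuming the 7B weights have already been downloaded (exact script names and flags may differ between versions):

# obtain the original LLaMA weights and place them in ./models, so that:
#   ls ./models
#   65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model ggml-vocab.bin

# install the Python dependencies used by the conversion script
python3 -m pip install torch numpy sentencepiece

# convert the 7B model to ggml FP16 format
python3 convert-pth-to-ggml.py models/7B/ 1

# quantize the model to 4 bits (q4_0)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# run inference
./main -m ./models/7B/ggml-model-q4_0.bin -n 128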