
talk-llama : fix n_gpu_layers usage again #1442

Merged: 1 commit into ggerganov:master on Nov 7, 2023

Conversation

@jhen0409 jhen0409 (Contributor) commented Nov 7, 2023

Context: https://github.com/ggerganov/whisper.cpp/pull/1441#issuecomment-1797160197
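
For orientation, here is a hedged sketch of what wiring an n_gpu_layers option through to model loading typically looks like in llama.cpp of this period, after llama_model_params was split out of llama_context_params. It is an illustration only, not this PR's diff; the struct and field names are assumed from the llama.h bundled with talk-llama at the time.

```cpp
#include "llama.h"

// Hedged sketch (not this PR's change): pass the requested number of GPU
// layers through llama_model_params when loading the model.
static struct llama_model * load_model(const char * model_path, int n_gpu_layers) {
    struct llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = n_gpu_layers; // number of layers offloaded to the GPU backend

    return llama_load_model_from_file(model_path, mparams);
}
```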

@bobqianic I confirmed it works:

➜  whisper.cpp git:(fix-talk-llama-build-2) make talk-llama
I whisper.cpp build info: 
I UNAME_S:  Darwin
I UNAME_P:  arm
I UNAME_M:  arm64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_ACCELERATE -DGGML_USE_METAL
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL
I LDFLAGS:   -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
I CC:       Apple clang version 15.0.0 (clang-1500.0.40.1)
I CXX:      Apple clang version 15.0.0 (clang-1500.0.40.1)

c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -pthread -DGGML_USE_METAL examples/talk-llama/talk-llama.cpp examples/talk-llama/llama.cpp examples/common.cpp examples/common-ggml.cpp examples/common-sdl.cpp ggml.o ggml-alloc.o ggml-backend.o ggml-quants.o whisper.o ggml-metal.o -o talk-llama `sdl2-config --cflags --libs`  -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
examples/talk-llama/talk-llama.cpp:401:9: warning: 'llama_eval' is deprecated: use llama_decode() instead [-Wdeprecated-declarations]
    if (llama_eval(ctx_llama, embd_inp.data(), embd_inp.size(), 0)) {
        ^
examples/talk-llama/llama.h:436:15: note: 'llama_eval' has been explicitly marked deprecated here
    LLAMA_API DEPRECATED(int llama_eval(
              ^
examples/talk-llama/llama.h:31:56: note: expanded from macro 'DEPRECATED'
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
                                                       ^
examples/talk-llama/talk-llama.cpp:584:29: warning: 'llama_eval' is deprecated: use llama_decode() instead [-Wdeprecated-declarations]
                        if (llama_eval(ctx_llama, embd.data(), embd.size(), n_past)) {
                            ^
examples/talk-llama/llama.h:436:15: note: 'llama_eval' has been explicitly marked deprecated here
    LLAMA_API DEPRECATED(int llama_eval(
              ^
examples/talk-llama/llama.h:31:56: note: expanded from macro 'DEPRECATED'
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
                                                       ^
2 warnings generated.
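
The two warnings above only flag that llama_eval() is deprecated in favor of llama_decode(); they do not affect the build. As a hedged illustration (again, not part of this PR), the equivalent call with the batch API of that llama.cpp version would look roughly like the sketch below; llama_batch_get_one() and the exact signatures are assumptions based on the bundled llama.h.

```cpp
#include "llama.h"

#include <cstdio>
#include <vector>

// Sketch: evaluate a chunk of tokens with llama_decode() instead of the
// deprecated llama_eval(). llama_batch_get_one() wraps a flat token array
// into a llama_batch starting at position n_past in sequence 0.
static bool eval_tokens(struct llama_context * ctx_llama, std::vector<llama_token> & embd, int n_past) {
    if (llama_decode(ctx_llama, llama_batch_get_one(embd.data(), (int32_t) embd.size(), n_past, 0))) {
        fprintf(stderr, "%s : llama_decode() failed\n", __func__);
        return false;
    }
    return true;
}
```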

@ggerganov ggerganov merged commit 75dc800 into ggerganov:master Nov 7, 2023
35 checks passed
vonstring pushed a commit to vonstring/whisper.cpp that referenced this pull request Nov 7, 2023
felrock pushed a commit to felrock/whisper.cpp that referenced this pull request Nov 18, 2023
landtanin pushed a commit to landtanin/whisper.cpp that referenced this pull request Dec 16, 2023
iThalay pushed a commit to iThalay/whisper.cpp that referenced this pull request Sep 23, 2024