When using the latest release (or master), the main binary compiles as expected and runs when the shell is inside the llama.cpp directory. However, running the same command from a different directory, through a symbolic link to the binary, fails with an error that Metal cannot find the ggml-common.h file.
You will need a model already downloaded to test this. I have tested against several models; the command below is just one example.
1. Clone the repo (if you don't have it already).
2. `cd` into the repo root and run `make` (Linux/macOS) to build the latest binary.
3. From a terminal in the llama.cpp repo root, where the `main` binary is, run a model. Example command: `./main -m ~/models/mistral-7b-instruct-v0.2.Q5_K_S.gguf --prompt "This is a test, tell me a nerdy coding joke"`. This should succeed and produce output as expected.
4. Make a symbolic link in /usr/local/bin; I'm going to use `llamacpp` as the link name: `sudo ln -s /Users/$(whoami)/llama.cpp/main /usr/local/bin/llamacpp`.
5. Run the same command through the symbolic link: `llamacpp -m ~/models/mistral-7b-instruct-v0.2.Q5_K_S.gguf --prompt "This is a test, tell me a nerdy coding joke"` and you'll get an error that contains this:
```
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Pro
ggml_metal_init: picking default device: Apple M3 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/sethtucker/llama.cpp/ggml-metal.metal'
ggml_metal_init: error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:3:10: fatal error: 'ggml-common.h' file not found#include "ggml-common.h" ^~~~~~~~~~~~~~~" UserInfo={NSLocalizedDescription=program_source:3:10: fatal error: 'ggml-common.h' file not found
#include "ggml-common.h"
         ^~~~~~~~~~~~~~~
}
```
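The behavior above comes down to working directories: a symbolic link does not change the directory a process runs in, so relative lookups still resolve against wherever the command was invoked. A minimal, self-contained demonstration of this (temporary paths and a stand-in `main` script, not the real binary):

```shell
# A symlinked program inherits the caller's working directory: relative
# paths resolve against the invocation directory, not the link target's.
tmp=$(mktemp -d)
mkdir -p "$tmp/repo" "$tmp/bin"
printf '#!/bin/sh\npwd\n' > "$tmp/repo/main"   # stand-in for the main binary
chmod +x "$tmp/repo/main"
ln -s "$tmp/repo/main" "$tmp/bin/llamacpp"     # same shape as the repro above
cd "$tmp"
"$tmp/bin/llamacpp"   # prints the invocation directory, not the repo directory
```

So any file that `main` opens with a relative path (like the Metal sources) is only found when the shell happens to be sitting in the repo root.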
If you `git checkout f1a98c52`, rebuild the binary with `make`, and then run the command through the `llamacpp` link created above, it works as expected. I'm using that tag as a reference because I have it pinned on one machine where I deliberately stopped updating from master for stability; releases come out of this repo incredibly fast and I'm not sure of the rhyme or reason behind the release pattern.
Is there a way I can configure the binary to know where to look for the correct Metal header file at run time? I took a stab at doing this and making a PR, but it's too far outside my area of expertise.
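On that question: the log above prints `GGML_METAL_PATH_RESOURCES = nil`, which suggests there is an environment variable intended for pointing the Metal backend at its resource files (worth trying, though I haven't confirmed it covers this case). Failing that, a small wrapper script installed in place of the bare symlink sidesteps the issue by always running `main` from inside the repo. This is only a sketch; `LLAMA_DIR` is an assumed clone location, adjust as needed:

```shell
#!/bin/sh
# Hypothetical wrapper to install as /usr/local/bin/llamacpp instead of
# a symlink: cd into the repo so ggml-metal.metal / ggml-common.h are
# found relative to the working directory, then exec the real binary
# with the original arguments.
LLAMA_DIR="${LLAMA_DIR:-$HOME/llama.cpp}"   # assumption: repo lives here
cd "$LLAMA_DIR" && exec ./main "$@"
```

Made executable with `chmod +x`, this behaves like the symlink from the caller's point of view but keeps the working directory the Metal loader appears to expect.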
My 2 test machines:
- Machine #1: M3 Pro, 36GB
- Machine #2: M1 Max 16", 32GB

Both on macOS 14.4 Sonoma. This error does not exist on release/tag f1a98c52.