GGML_ASSERT: ggml-metal.m:539: false && "not implemented" #1693
Comments
I am also getting this error.
This seems to be a duplicate of #1697. Are you trying to load anything other than a q4_0-quantized model? That is not supported yet for Metal (only q4_0 is supported right now).
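For reference, a model can be re-quantized to q4_0 with the bundled `quantize` tool. A minimal sketch, assuming the usual llama.cpp build layout and an f16 GGML model as input (paths are placeholders):

```sh
# Convert an f16 GGML model to q4_0, the only quantization
# the Metal backend supports at this point.
./quantize ./models/guanaco-7B.ggmlv3.f16.bin ./models/guanaco-7B.ggmlv3.q4_0.bin q4_0
```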
I am getting nonsense when using Metal. Without it the models perform normally. Is anyone else seeing this? Here is my output:

```
(base) adam@adams-mbp bin % ./main -m /Users/adam/Documents/Projects/langchainstufff/llama.cpp/models/guanaco-7B.ggmlv3.q4_0.bin -p "I believe the meaning of life is " --ignore-eos -ngl 1
system_info: n_threads = 6 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
I believe the meaning of life is 4ceratypenameculgeuenani
```
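One way to confirm the garbled text comes from the Metal path is to run the same prompt with GPU offload disabled and compare. A sketch, reusing the flags from the run above:

```sh
# CPU only: -ngl 0 keeps all layers off the GPU
./main -m ./models/guanaco-7B.ggmlv3.q4_0.bin -p "I believe the meaning of life is " --ignore-eos -ngl 0

# Metal: offload one layer, as in the failing run above
./main -m ./models/guanaco-7B.ggmlv3.q4_0.bin -p "I believe the meaning of life is " --ignore-eos -ngl 1
```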
I think my issue is that I am not on Apple Silicon? I have an Intel-CPU MacBook Pro (2018).
Getting the same thing, and I am on Apple Silicon (M1 Max, 32-core). AFAIK I compiled everything correctly with
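For what it's worth, the usual Metal build at the time of this issue is a clean rebuild with the `LLAMA_METAL` makefile flag. A minimal sketch, assuming the Makefile build rather than CMake:

```sh
# Clean rebuild with the Metal backend enabled
make clean
LLAMA_METAL=1 make
```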
Run
@ggerganov - I have tried the latest master (590250f) and AFAIK it is still failing for me with the same error as here. I also tried a few commits back, right after the initial Metal implementation was merged (d1f563a), and the same steps work correctly there, so I think something broke in the subsequent commits.
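Since a known-good commit (d1f563a) and a known-bad one (590250f) are already identified, `git bisect` can narrow down the offending change. A sketch:

```sh
# Mark the endpoints and let git walk the commits in between
git bisect start
git bisect bad 590250f
git bisect good d1f563a
# Rebuild and re-test at each step git checks out, then run:
#   git bisect good   # if this commit works
#   git bisect bad    # if it reproduces the failure
```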
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior

Please provide a detailed written description of what you were trying to do, and what you expected llama.cpp to do.

Current Behavior

Please provide a detailed written description of what llama.cpp did instead.

Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except under certain specific conditions.
$ lscpu
$ uname -a
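Note that `lscpu` is not available on macOS, where most Metal reports originate; rough equivalents, as an illustration, are:

```sh
# macOS equivalents for the environment info above
sysctl -n machdep.cpu.brand_string   # CPU model
sysctl -n hw.ncpu                    # logical core count
sw_vers                              # macOS version
uname -a                             # kernel info (works on macOS too)
```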
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Failure Logs
Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.
Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use GitHub's markdown to cleanly format your logs for easy readability.
Example environment info:
Example run with the Linux command `perf`:
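A minimal sketch of such a run, assuming a Linux box with `perf` installed and a placeholder model path:

```sh
# Collect basic hardware counters for a short generation run
perf stat ./main -m ./models/7B/ggml-model-q4_0.bin -p "Hello" -n 64
```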