How to implement CLBLAST ? #1433
It should be possible. It needs to be built as a library file; the commands you can see in build.yml. Sorry, it may be a little hard to understand this stuff if you are not a developer.
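For reference, a minimal sketch of the kind of commands build.yml runs, assuming CMake, a C++ toolchain, and CLBlast are installed. `BUILD_SHARED_LIBS` and `LLAMA_CLBLAST` were the CMake flags used by llama.cpp builds of this era; check your version's README, since flag names have changed over time.

```shell
:: Windows cmd sketch: build llama.cpp as a shared library with CLBlast
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_CLBLAST=ON
cmake --build build --config Release
:: llama.dll typically ends up under build\bin\Release (varies by generator)
```

If CMake cannot find CLBlast, you usually have to point it at the install prefix, e.g. with `-DCMAKE_PREFIX_PATH`.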
Thanks! What is llama.dll? And BUILD_SHARED_LIBS=ON does not work (llama.dll doesn't appear).
llama-cpp-python needs a library form of llama.cpp, which on Windows would be a DLL file. But you could ask the llama-cpp-python maintainers to do this.
Thanks!
Ok!!! After a long, long time I finally got llama.dll (very hard, a lot of errors, it's not simple at all...).
llama.cpp: loading model from D:\ia\ia\ggml-model-q4_1.bin
llama_model_load_internal: format = ggjt v2 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 3 (mostly Q4_1)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 90.75 KB
llama_model_load_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)
Initializing CLBlast (First Run)...
Attempting to use: Platform=0, Device=0 (If invalid, program will crash)
Using Platform: w�U Device: ��(�
OpenCL clCreateContext error -33 at D:\ia\ia\llama.cpp\ggml-opencl.c:213 <- The error is here! What do I have to do?
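Error -33 is CL_INVALID_DEVICE, which matches the garbage platform/device names printed above: the platform/device indices being used do not correspond to a real OpenCL device on this machine. A small decoding table for log messages like this, with values taken from the OpenCL CL/cl.h header:

```python
# Subset of OpenCL status codes from CL/cl.h, for decoding ggml-opencl logs
CL_ERRORS = {
    0: "CL_SUCCESS",
    -30: "CL_INVALID_VALUE",
    -32: "CL_INVALID_PLATFORM",
    -33: "CL_INVALID_DEVICE",
    -34: "CL_INVALID_CONTEXT",
}

def decode(code: int) -> str:
    """Return the symbolic name for an OpenCL status code."""
    return CL_ERRORS.get(code, f"unknown OpenCL error {code}")

print(decode(-33))  # CL_INVALID_DEVICE
```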
This doesn't look good. The OpenCL device-selection logic changed recently; maybe it works better for you now? Just in case, you should also try the version from the Releases page.
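If your build is recent enough to honor them, the llama.cpp README documents two environment variables for choosing the OpenCL platform and device explicitly instead of relying on index 0; a Windows cmd sketch (the model path is illustrative):

```shell
:: Select the AMD platform by name and device 0 by index, then run
set GGML_OPENCL_PLATFORM=AMD
set GGML_OPENCL_DEVICE=0
main.exe -m ggml-model-q4_1.bin -p "Hello"
```

A tool like `clinfo` is useful here: it lists the platforms and devices your OpenCL runtime actually exposes, so you can see which names and indices are valid.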
There are instructions for llama-cpp-python on how to install it with CUDA or CLBlast: https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast
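In short, that page boils down to passing CMake flags through pip. A sketch for Windows cmd, assuming the variable names documented in the llama-cpp-python README of that era (`FORCE_CMAKE`, `CMAKE_ARGS`):

```shell
:: Rebuild llama-cpp-python from source with CLBlast enabled
set FORCE_CMAKE=1
set CMAKE_ARGS=-DLLAMA_CLBLAST=ON
pip install llama-cpp-python --force-reinstall --no-cache-dir
```

`--force-reinstall --no-cache-dir` matters: without it, pip may reuse a cached wheel built without CLBlast.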
I know how to install CLBlast, it's okay, thanks 😄
Hi, I have built the latest llama.cpp with OpenCL on Windows 11 with my Vega VII. It does say it uses my GPU in the output, but it actually uses my CPU for all calculations.
My settings:
Any ideas?
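One likely explanation: in CLBlast builds of this period, the GPU was only used for the large matrix multiplications during prompt processing, while token generation stayed on the CPU, so low GPU usage during generation is expected. Later builds added layer offloading for OpenCL; a hypothetical invocation for such a build (flag support varies by version):

```shell
:: Offload 32 of a 13B model's 40 layers to the GPU, if the build supports it
main.exe -m ggml-model-q4_1.bin --n-gpu-layers 32 -p "Hello"
```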
This issue was closed because it has been inactive for 14 days since being marked as stale.
Hey! I want to implement CLBLAST to use llama.cpp with my AMD GPU, but I don't know how to do it!
Can you explain it to me? How do I use it with llama-cpp-python?
PS: I'm on Windows... my Linux skills are bad...
Thanks in advance!
Labo