Enable cuda options in llama.cpp and bump its version #24683
base: master
Conversation
I am actually trying to get some help from a former contributor to the llama.cpp recipe, as I found some issues while compiling: the ggml library itself builds OK, but the compiler complains about missing CUDA symbols for the test_package binary, with both the newer version and the former version b3040. Here are the complete logs: llama-cpp.log. Do you guys find any problems after …
Hi @RobinQu, thanks for your contribution. I suspect the issues you are experiencing are due to the following in the …
It should probably be … Edit: on second thought, it may need to be its own component.
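The suggestion above is truncated, but it appears to concern how the recipe declares its components in package_info() and where the CUDA link requirements should live. As a hedged sketch only (the component names and system libraries below are assumptions for illustration, not the recipe's actual layout), making ggml its own component that carries the CUDA link flags might look like:

```python
# Hypothetical sketch, assuming a Conan 2 recipe; component names and
# system_libs are illustrative, not the recipe's real code.
def package_info(self):
    self.cpp_info.components["ggml"].libs = ["ggml"]
    if self.options.get_safe("cuda"):
        # Without linking the CUDA runtime libraries on the component
        # that uses them, consumers such as test_package can fail with
        # missing CUDA symbols, as reported above.
        self.cpp_info.components["ggml"].system_libs.extend(["cudart", "cublas"])
    self.cpp_info.components["llama"].libs = ["llama"]
    self.cpp_info.components["llama"].requires = ["ggml"]
```

Declaring the CUDA libraries on the component that actually needs them is what the "its own component" remark seems to be getting at.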
Hi @RobinQu,
Tried to update components of …
I tried the code on your branch and llama.cpp won't build; it should be a problem introduced in … BTW, the build log is attached below. I think the …
I suspect there may be some environment issues on my server. However, I am testing the recipe in a Docker container with the official …
For …
Hi @RobinQu,
Conan v1 pipeline ❌
Failure in build 3
Note: To save resources, CI tries to finish as soon as an error is found. For this reason you might find that not all the references have been launched, or not all the configurations for a given reference. Also, take into account that we cannot guarantee the order of execution, as it depends on CI workload and worker availability.

Conan v2 pipeline ❌
The v2 pipeline failed. Please review the errors and note this is required for pull requests to be merged. In case this recipe is still not ported to Conan 2.x, please ping …
Failure in build 3
Summary
Changes to recipe: llama.cpp
Motivation
Details
In conandata.yml, I add the source package and its sha256 checksum for b3438.
In conanfile.py, I add a cuda option and set False as its default. If it is enabled, LLAMA_CUDA will be added to trigger compilation with CUDA in the original CMakeLists.txt of llama.cpp (see the sketch below).
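As a hedged illustration of the two changes described above: the new conandata.yml entry might look like the following, where the URL follows GitHub's tag-archive convention and the checksum is a placeholder, not the real value.

```yaml
sources:
  "b3438":
    url: "https://github.com/ggerganov/llama.cpp/archive/refs/tags/b3438.tar.gz"
    sha256: "<sha256 of the b3438 tarball>"  # placeholder; compute from the real archive
```

And a minimal sketch of the conanfile.py wiring, assuming a Conan 2 recipe using CMakeToolchain (class name, option set, and structure are illustrative, not the recipe's exact code):

```python
from conan import ConanFile
from conan.tools.cmake import CMakeToolchain

class LlamaCppConan(ConanFile):
    name = "llama-cpp"
    settings = "os", "arch", "compiler", "build_type"
    # New "cuda" option, disabled by default as described above.
    options = {"shared": [True, False], "cuda": [True, False]}
    default_options = {"shared": False, "cuda": False}

    def generate(self):
        tc = CMakeToolchain(self)
        # Forward the option to LLAMA_CUDA, the upstream CMake switch
        # named in this PR, so llama.cpp's own CMakeLists.txt compiles
        # the CUDA backend when the option is enabled.
        tc.variables["LLAMA_CUDA"] = bool(self.options.cuda)
        tc.generate()
```

Assuming the option is named cuda as described, a consumer could then enable it with something like conan create . --version=b3438 -o "llama-cpp/*:cuda=True" (Conan 2 option syntax).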