use cmake instead of cc
#221
Conversation
seems metal_hack is still needed
# Conflicts:
#   llama-cpp-sys-2/build.rs
metal hack complete
this should now fix #219
} else {
    "OFF"
})
.define("BUILD_SHARED_LIBS", "ON")
It would be appreciated to have a static feature for those who prefer static linking.
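One way to honour such a feature would be to key the `BUILD_SHARED_LIBS` define off Cargo's feature env vars in `build.rs`. A minimal sketch, assuming a hypothetical `static` feature declared in `Cargo.toml` (`[features] static = []`); this is not the crate's actual code:

```rust
// Sketch: cargo exposes an enabled feature `static` to build scripts
// as the env var CARGO_FEATURE_STATIC.
fn shared_libs_define() -> (&'static str, &'static str) {
    let static_requested = std::env::var_os("CARGO_FEATURE_STATIC").is_some();
    (
        "BUILD_SHARED_LIBS",
        if static_requested { "OFF" } else { "ON" },
    )
}

fn main() {
    // Outside of a cargo build the env var is unset, so this defaults
    // to shared linking, matching the define in the diff above.
    println!("{:?}", shared_libs_define());
}
```

The resulting tuple would then be passed to the CMake configuration via `.define(...)` as in the diff above.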
for build in [&mut ggml, &mut llama_cpp] {
    let compiler = build.get_compiler();

if cfg!(target_arch = "i686") || cfg!(target_arch = "x86_64") {
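For context, the `cc`-based detection being removed here mapped the target's x86 features to per-compiler flags. A rough sketch of that pattern (not the crate's actual code; the flag names are the standard GCC/Clang and MSVC spellings):

```rust
// Sketch: cargo exposes the target's enabled CPU features to build
// scripts via the CARGO_CFG_TARGET_FEATURE env var (comma-separated).
// Map a few of them to compiler flags, branching on the compiler family.
fn x86_feature_flags(target_features: &str, is_msvc: bool) -> Vec<String> {
    let feats: Vec<&str> = target_features.split(',').collect();
    let mut flags = Vec::new();
    if feats.contains(&"avx2") {
        flags.push(if is_msvc { "/arch:AVX2" } else { "-mavx2" }.to_string());
    } else if feats.contains(&"avx") {
        flags.push(if is_msvc { "/arch:AVX" } else { "-mavx" }.to_string());
    }
    if !is_msvc && feats.contains(&"fma") {
        flags.push("-mfma".to_string());
    }
    flags
}

fn main() {
    // e.g. a non-MSVC target compiled with -Ctarget-cpu set:
    let flags = x86_feature_flags("avx,sse3,sse4.1", false);
    println!("{flags:?}");
}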
Note that by default the CMake build will assume target-cpu=native (LLAMA_NATIVE), so you'll lose some portability by removing this. I don't mind making a PR to adapt this feature detection to the CMake build after this is merged, if you'd prefer.
I can re-add the feature detection (doesn't seem fair to delete your code and expect you to add it back!)
How does the feature detection as we had it differ from targeting native as it exists in the cmake file?
What sort of portability do we lose out on?
The previous feature detection worked based on the target's features. E.g., if you want to ensure almost any modern CPU can run your software, you can use -Ctarget-cpu=ivybridge as a compiler flag to enable just avx and sse3, which any CPU made post-2012 should support.
With the LLAMA_NATIVE flag it instead depends on the CPU of the machine compiling the software, so it will enable all of that machine's features and ignore your -Ctarget-cpu=ivybridge setting. Or, in turn, it might not enable AVX2 if the software was compiled on an older CPU.
To replicate the old behaviour it's just a case of passing LLAMA_AVX=ON/OFF, LLAMA_AVX2=ON/OFF, etc., to the cmake build instead of the compiler-specific flags (and turning off LLAMA_NATIVE). Similar to what llama.cpp does in their CI.
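The approach described above can be sketched as a small translation step in `build.rs`: read the target's features and emit explicit LLAMA_* defines with LLAMA_NATIVE forced off. A minimal sketch (not the crate's actual code; in a real build script the feature string would come from the CARGO_CFG_TARGET_FEATURE env var, and the define names follow llama.cpp's CMakeLists):

```rust
// Sketch: turn the Rust target's CPU features into explicit LLAMA_*
// CMake defines, so codegen follows the *target* rather than the host.
fn llama_feature_defines(target_features: &str) -> Vec<(&'static str, &'static str)> {
    let feats: Vec<&str> = target_features.split(',').collect();
    let has = |f: &str| feats.contains(&f);
    let onoff = |b: bool| if b { "ON" } else { "OFF" };
    vec![
        // Pin detection to the target, never the build host.
        ("LLAMA_NATIVE", "OFF"),
        ("LLAMA_AVX", onoff(has("avx"))),
        ("LLAMA_AVX2", onoff(has("avx2"))),
        ("LLAMA_FMA", onoff(has("fma"))),
        ("LLAMA_F16C", onoff(has("f16c"))),
    ]
}

fn main() {
    // e.g. roughly the features -Ctarget-cpu=ivybridge would report:
    for (k, v) in llama_feature_defines("avx,sse3,sse4.1,sse4.2") {
        println!("{k}={v}");
    }
}
```

Each resulting pair would then be handed to the CMake configuration via `.define(key, value)`.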
Thanks, that seems worth the effort. You're welcome to make a PR to this branch (won't be merged for a while).
Otherwise I'll make sure it's done before this gets merged into main.
Bare-bones implementation of swapping over to cmake.
I ran into some dastardly issues with doctests not linking shared libraries; it has been fixed in the latest beta, so I will not merge this until that's stable (2 May, 2024).
Time permitting I will work on testing Metal. (works)
TODO:
cc #221 (comment)
cc #221 (comment)