Precompiled wheels #440
@ParisNeo I don't think there's a way; llama.cpp optimizes the code at build time based on your CPU.
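Because the binary is tuned to the build machine's CPU, installing from source on the target host is usually the safest route. A hedged sketch of how that is typically forced with llama-cpp-python's documented `FORCE_CMAKE`/`CMAKE_ARGS` knobs (the specific CMake flags below are illustrative, not the only valid set):

```shell
# Rebuild llama-cpp-python from source on this machine instead of
# reusing a cached or prebuilt wheel, so the compiled llama.cpp
# matches the host CPU's instruction set.
pip install --no-cache-dir --force-reinstall llama-cpp-python

# Same idea with extra CMake options passed through, e.g. OpenBLAS:
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 \
    pip install --no-cache-dir --force-reinstall llama-cpp-python
```

If the build fails on a given platform, the error usually comes from a missing compiler toolchain or CMake rather than from the package itself.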
@ParisNeo There are pre-built wheels that I originally set up for the text-generation-webui folks to simplify their installation for Windows users; those are in the releases. The issue is that, as @gaby points out, without compiling on the host system llama.cpp will be slow, because there's no way to map wheels to the optimizations used in llama.cpp. One solution we're working on right now is Docker images that build on startup (with optional caching). Does that work for you?
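The build-on-startup idea described above could look roughly like this. This is only a sketch of the assumed caching scheme, not the project's actual images; the wheel directory, entrypoint name, and base image are all illustrative:

```dockerfile
# Hypothetical sketch: compile llama-cpp-python at container startup so
# the build sees the CPU of the machine actually running the container.
FROM python:3.11-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential cmake \
    && rm -rf /var/lib/apt/lists/*
# Mount a volume here so the built wheel survives container restarts.
VOLUME /wheels
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

with an entrypoint along these lines:

```shell
#!/bin/sh
# Build the wheel once on first startup, then reuse the cached copy.
if ! ls /wheels/llama_cpp_python-*.whl >/dev/null 2>&1; then
    pip wheel --wheel-dir /wheels llama-cpp-python
fi
pip install --no-index --find-links /wheels llama-cpp-python
exec "$@"
```

The first start pays the compile cost; later starts only install the cached wheel, which keeps the host-specific optimizations without rebuilding every time.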
Related to #243
@abetlen Is this done?
Hi. I am sorry for being so late; this was lost in my old issues. Haha.
Hi,
Is there a way to have multiple precompiled wheels for this library?
I use it on multiple platforms and it fails to install so often because of build errors
Best regards