[Investigate] Custom llama.dll Dependency Resolution Issues on Windows #12
Comments
For me, I could not even get past this:

```python
# TODO: fragile, should fix
_base_path = pathlib.Path(__file__).parent
(_lib_path,) = chain(
    _base_path.glob("*.so"), _base_path.glob("*.dylib"), _base_path.glob("*.dll")
)
```

I get this error:

```
(_lib_path,) = chain(
ValueError: not enough values to unpack (expected 1, got 0)
```

So I just commented it out and did this instead (note the raw string, so the backslashes aren't treated as escapes):

```python
_lib_path = r"C:\path\to\my\llama.dll"
```
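That `ValueError` fires because the glob finds no shared library next to the module, so the one-element unpack fails. A defensive sketch of the same lookup with a clearer error and an override hook (the `LLAMA_CPP_LIB` environment variable name here is my own invention for illustration, not part of the package):

```python
import os
import pathlib
from itertools import chain


def find_llama_lib(base_path: pathlib.Path) -> pathlib.Path:
    """Locate the llama shared library next to the package.

    An explicit path in the (hypothetical) LLAMA_CPP_LIB environment
    variable wins; otherwise the first .so/.dylib/.dll found in
    base_path is returned, with a readable error if none exists.
    """
    override = os.environ.get("LLAMA_CPP_LIB")
    if override:
        return pathlib.Path(override)
    candidates = list(
        chain(
            base_path.glob("*.so"),
            base_path.glob("*.dylib"),
            base_path.glob("*.dll"),
        )
    )
    if not candidates:
        raise FileNotFoundError(
            f"No llama shared library found in {base_path}; "
            "set LLAMA_CPP_LIB to the full path of your build."
        )
    return candidates[0]
```

Unlike the one-element unpack, this also tolerates multiple matching libraries by taking the first, and tells you exactly which directory it searched when nothing is found.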
Okay, yep. Actually just made a PR to make the libs easier to work with. Will be better soon :) Thanks for sharing.
@geocine I've merged @MillionthOdin16's fix for library loading; does the default installation path work for you now?
Thanks @abetlen, I will try again later.
I think there might be a combination of two things that make the lib setup confusing at the moment:

* Just a note, the
@MillionthOdin16 ahh, that makes sense. I currently only have access to Linux and one Mac system, so I haven't had a chance to test on Windows. The shared library is supposed to build when you install the package from pip; I'm using scikit-build for that. It seems scikit-build allows you to inject build arguments into cmake (I assume at build time, after you determine the platform); see the documentation here. This may be a solution for Windows users, but unfortunately I can't really help on this front.
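As a sketch of that scikit-build route: CMake options can be injected at install time via environment variables. The specific option name below (`LLAMA_OPENBLAS`) and the `FORCE_CMAKE` variable are assumptions for illustration, not confirmed by this thread:

```shell
# Hypothetical: inject CMake flags through scikit-build when pip builds
# the wheel, so the bundled library gets BLAS support.
CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```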
I describe the behavior in #30 (comment). Once we resolve that, this will be resolved as well.
Resolved in 0fd3204 |
This is a note about using a custom `llama.dll` build on Windows. I ran into dependency resolution issues when loading my own `llama.dll`, compiled with BLAS support and some extra hardware-specific optimization flags. No matter what I do, it can't seem to locate all of its dependencies, even though I've tried placing them in system paths and even in the same directory.

My current workaround is using the default `llama.dll` that llama-cpp-python builds, but it doesn't have the hardware optimizations and BLAS compatibility that I enabled in my custom build. So I'm still trying to figure out what my issue is. Maybe something Python-specific that I'm missing...

I'm dropping this issue here in case anyone else runs into something similar. If you have any ideas or workarounds, let me know. I'll keep trying to figure it out until I get it resolved haha :)
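One common cause of "can't locate dependencies" for a custom DLL on Windows: since Python 3.8, directories on `PATH` are no longer searched when resolving a DLL's own dependencies (such as the OpenBLAS runtime), so they must be registered with `os.add_dll_directory` before loading. A minimal sketch, assuming the function name and argument layout are illustrative rather than llama-cpp-python's actual API:

```python
import ctypes
import os
import sys


def load_custom_llama(dll_path, extra_dirs=()):
    """Load a shared library, first registering directories that hold
    its dependencies (e.g. BLAS runtime DLLs) with the OS loader.

    On Windows with Python 3.8+, PATH is no longer consulted for
    dependent DLLs, so each directory must be added explicitly.
    On other platforms, extra_dirs is ignored and the normal
    dynamic-loader search applies.
    """
    if sys.platform == "win32":
        for d in extra_dirs:
            os.add_dll_directory(os.path.abspath(d))
    return ctypes.CDLL(dll_path)
```

With this approach the custom build and its BLAS dependencies can live anywhere, as long as every directory containing a dependent DLL is passed in `extra_dirs`.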