Unable to load models with Llamacpp #686
Comments
I was facing the same error; install llama-cpp-python==0.2.26 and it should work!
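If it helps anyone confirm the pin took effect, a small check (the expected version string comes from the comment above):

```python
# Verify which llama-cpp-python build is active after pinning it
# (install with: pip install llama-cpp-python==0.2.26).
import llama_cpp

print(llama_cpp.__version__)  # should print "0.2.26" if the pin took effect
```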
Using an older llama-cpp-python version works, but it limits the use of some newer models. For example, you can't load …
I think this issue is resolved with PR #665, but there hasn't been a release since then.
Should we expect a release any time soon, or should I simply cherry-pick the change into a local fork?
EDIT: @paulbkoch, maybe consider offering a nightly version on PyPI that reflects the latest state of development without requiring us to pip install from GitHub?
Any hope for a release to PyPI soon with this fix?
Has the pull request been merged, so that installing directly from GitHub picks it up? Or is there a way to install from GitHub and pull in the pull request at the same time?
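For what it's worth, a hedged sketch of both options. The repository URL is assumed from the project name, and refs/pull/665/head is GitHub's standard read-only ref for a pull request; neither is confirmed anywhere in this thread:

```python
# Install guidance straight from GitHub, optionally at the head of PR #665.
# Equivalent shell commands:
#   pip install "git+https://github.com/guidance-ai/guidance.git"
#   pip install "git+https://github.com/guidance-ai/guidance.git@refs/pull/665/head"
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "git+https://github.com/guidance-ai/guidance.git@refs/pull/665/head",
])
```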
The bug
A clear and concise description of what the bug is.
To Reproduce
Give a full working code snippet that can be pasted into a notebook cell or a Python file. Make sure to include the LLM load step so we know which model you are using.
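As a hedged illustration of the kind of snippet the template asks for (the model path and prompt are placeholders, not taken from the original report, and the models.LlamaCpp API assumes a recent guidance release):

```python
# Minimal reproduction sketch: load a local GGUF model through guidance's
# llama.cpp backend and run a trivial generation.
import guidance
from guidance import gen, models

print(guidance.__version__)

lm = models.LlamaCpp("path/to/model.gguf", n_ctx=2048)  # placeholder path
lm += "The capital of France is " + gen(max_tokens=5)
print(str(lm))
```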
System info (please complete the following information):
- OS:
- Guidance Version (guidance.__version__):
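A small helper sketch for gathering the requested info (the guarded import is just in case llama-cpp-python is missing):

```python
# Print the details the issue template asks for.
import platform

import guidance

print("OS:", platform.platform())
print("Guidance Version:", guidance.__version__)

try:
    import llama_cpp
    print("llama-cpp-python:", llama_cpp.__version__)
except ImportError:
    print("llama-cpp-python: not installed")
```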