Issues: abetlen/llama-cpp-python
#1923 Unable to Build llama-cpp-python with Vulkan (Core Dump on Model Load), opened Feb 6, 2025 by Talnz007
#1917 See "No CUDA toolset found" log after "-- CUDA Toolkit found." log, opened Feb 3, 2025 by TomaszDlubis
#1914 RPC is broken due to change of interface in llama.cpp main repository (rpc : early register backend devices #11262), opened Jan 30, 2025 by j-lag
#1908 pip install llama-cpp-python got stuck forever at "Configuring CMake" in docker, opened Jan 24, 2025 by jiafatom
#1907 openai API max_completion_tokens argument is ignored, opened Jan 24, 2025 by BenjaminMarechalEVITECH
#1904 Add minicpm-o and qwen2-vl to the list of supported multimodal models, opened Jan 24, 2025 by kseyhan
#1903 OSError: exception: access violation reading 0x0000000000000000, opened Jan 24, 2025 by andretisch