“<method 'append' of 'list' objects> returned a result with an exception set” #1006
I got the same error with llama-cpp-python v0.2.22.

I'm hitting this problem as well. I'm using Chroma, which requires me to specify an embedding function.
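For context, Chroma expects an embedding function to be a callable that takes a list of documents and returns one embedding vector per document. Below is a minimal sketch of such a wrapper; the `embed` callable it takes is an assumption standing in for whatever actually produces the vectors (e.g. a `llama_cpp.Llama` instance created with `embedding=True`), so the wrapper itself stays independent of the buggy code path:

```python
from typing import Callable, List


class LlamaEmbeddingFunction:
    """Chroma-style embedding function wrapping a pluggable embedder.

    Chroma calls the function with a list of documents and expects one
    embedding vector (a list of floats) back for each document.
    """

    def __init__(self, embed: Callable[[str], List[float]]):
        # `embed` would typically close over a llama.cpp model, e.g.
        # the `embed` method of a Llama instance built with embedding=True.
        self._embed = embed

    def __call__(self, input: List[str]) -> List[List[float]]:
        # One vector per input document, in order.
        return [self._embed(text) for text in input]
```

The exact parameter name Chroma's interface expects (`input` vs. the older `texts`) depends on your chromadb version, so check against the version you have installed.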
brandonrobertz added a commit to brandonrobertz/llama-cpp-python that referenced this issue (Dec 17, 2023):

> This addresses two issues: abetlen#995, which requests adding the KV cache offloading param, and abetlen#1006, a NULL ptr exception when using the embeddings (introduced by leaving f16_kv in the fields struct).
I was able to replicate this and fix it on my end. Does this PR fix it for y'all as well? #1019
brandonrobertz added a commit to brandonrobertz/llama-cpp-python that referenced this issue (Dec 17, 2023):

> F16_KV appears to have been removed here: ggml-org/llama.cpp@af99c6f. This addresses two issues: abetlen#995, which requests adding the KV cache offloading param, and abetlen#1006, a NULL ptr exception when using the embeddings (introduced by leaving f16_kv in the fields struct).
Thank you! I had the same problem and #1019 fixed it.
abetlen pushed a commit that referenced this issue (Dec 18, 2023):

> F16_KV appears to have been removed here: ggml-org/llama.cpp@af99c6f. This addresses two issues: #995, which requests adding the KV cache offloading param, and #1006, a NULL ptr exception when using the embeddings (introduced by leaving f16_kv in the fields struct).
I encountered this error when calling the embedding function llama.create_embedding with the code below. abstract is a text of about 700 tokens. With the previous version, 0.1.83, it runs fine; the error appeared after upgrading to version 0.2.22.

This is the error report:

This is the source code, and I suspect the value returned by llama_cpp.llama_get_embeddings(self._ctx.ctx) may be NULL.
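That hypothesis is consistent with the reported symptom: if llama_get_embeddings hands back a NULL pointer, indexing it through ctypes raises a ValueError from C code, which can then surface as the cryptic "returned a result with an exception set" message. A stdlib-only sketch of that failure mode (no llama.cpp involved, just how ctypes treats a NULL pointer):

```python
import ctypes

# A NULL float*, such as a C function can return when embeddings were
# never computed (e.g. the context was not created with embeddings on).
null_ptr = ctypes.POINTER(ctypes.c_float)()

# A NULL ctypes pointer is falsy, so callers can guard before indexing.
assert not null_ptr

# Indexing it is what blows up inside the wrapper's list building:
# ctypes raises ValueError("NULL pointer access").
try:
    _ = null_ptr[0]
except ValueError as exc:
    print(exc)
```

A defensive fix on the wrapper side would be to check the pointer for truthiness before slicing it into a Python list; the actual fix in #1019 removes the stale f16_kv field that was corrupting the params struct.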