
“<method 'append' of 'list' objects> returned a result with an exception set” #1006

Closed
GluttonousCat opened this issue Dec 13, 2023 · 6 comments

@GluttonousCat

I encountered this error when calling the embedding function llama.create_embedding with the code below. abstract is a text of about 700 tokens. On the previous version, 0.1.83, this code ran fine; on the current version, 0.2.22, it fails.

from llama_cpp import Llama

llama = Llama(model_path='./llama-2-7b.Q4_K_M.gguf', embedding=True, n_ctx=2048, n_gpu_layers=30)
abstract_embedding = llama.create_embedding(input=abstract).get('data')[0].get('embedding')

This is the traceback:

Traceback (most recent call last):
  File "C:\Users\AI\PycharmProjects\roberta-gat\demo.py", line 107, in <module>
    get_abstract_embedding(path='data/validation.csv', start=0)
  File "C:\Users\AI\PycharmProjects\roberta-gat\demo.py", line 50, in get_abstract_embedding
    abstract_embedding = llama.create_embedding(input=abstract).get('data')[0].get('embedding')
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AI\.conda\envs\roberta-gat\Lib\site-packages\llama_cpp\llama.py", line 1309, in create_embedding
    data.append(
SystemError: <method 'append' of 'list' objects> returned a result with an exception set

This is the source code, and I suspect the value returned by llama_cpp.llama_get_embeddings(self._ctx.ctx) may be NULL:

for index, input in enumerate(inputs):
    tokens = self.tokenize(input.encode("utf-8"), special=True)
    self.reset()
    self.eval(tokens)
    n_tokens = len(tokens)
    total_tokens += n_tokens
    # Slices the C float pointer returned by llama_get_embeddings; if that
    # pointer is NULL, the failure surfaces at data.append() as the
    # SystemError shown in the traceback above
    embedding = llama_cpp.llama_get_embeddings(self._ctx.ctx)[
        : llama_cpp.llama_n_embd(self._model.model)
    ]

    data.append(
        {
            "object": "embedding",
            "embedding": embedding,
            "index": index,
        }
    )
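
For reference, here is a minimal defensive sketch of that block, assuming llama_cpp.llama_get_embeddings returns a ctypes float pointer (a NULL ctypes pointer is falsy). This is only a diagnostic guard to confirm the NULL hypothesis, not the actual fix that later landed:

ptr = llama_cpp.llama_get_embeddings(self._ctx.ctx)
if not ptr:  # NULL pointer: no embeddings were computed for this context
    raise RuntimeError(
        "llama_get_embeddings returned NULL; "
        "was the model created with embedding=True?"
    )
embedding = ptr[: llama_cpp.llama_n_embd(self._model.model)]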
@CharlieChenDSS

Got the same error with llama-cpp-python v0.2.22:

  File "/Users/xxxx/opt/anaconda3/envs/langchain_python_v3.9/lib/python3.9/site-packages/llama_cpp/llama.py", line 1309, in create_embedding
    data.append(
SystemError: <method 'append' of 'list' objects> returned a result with an exception set

@CharlieChenDSS

@abetlen could you please look into this issue when you get a chance? Thanks.
It may be raised from the code block below, in the create_embedding function of llama.py:
embedding = llama_cpp.llama_get_embeddings(self._ctx.ctx)[ : llama_cpp.llama_n_embd(self._model.model) ]

@GaryZhen

I'm hitting this problem as well. I am using Chroma, which requires me to specify an embedding function, so I have to use the llama-cpp embeddings wrapper from LangChain. However, it always raises this error.
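
For context, a minimal sketch of that setup (the model path and document text are placeholders), which routes Chroma's embedding calls through llama-cpp-python and hits the same create_embedding path:

from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

# Placeholder GGUF model path
embeddings = LlamaCppEmbeddings(model_path="./llama-2-7b.Q4_K_M.gguf")
# Chroma calls embeddings.embed_documents(), which calls
# Llama.create_embedding under the hood and triggers the error above
db = Chroma.from_texts(["example document"], embedding=embeddings)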

brandonrobertz added a commit to brandonrobertz/llama-cpp-python that referenced this issue Dec 17, 2023
This addresses two issues:

 - abetlen#995 which just requests to add the KV cache offloading param
 - abetlen#1006 a NULL ptr exception when using the embeddings (introduced by
   leaving f16_kv in the fields struct)
@brandonrobertz
Contributor

I was able to replicate this and fix it on my end. Does this PR also fix it for y'all? #1019

brandonrobertz added a commit to brandonrobertz/llama-cpp-python that referenced this issue Dec 17, 2023
F16_KV appears to have been removed here: ggml-org/llama.cpp@af99c6f

This addresses two issues:

 - abetlen#995 which just requests to add the KV cache offloading param
 - abetlen#1006 a NULL ptr exception when using the embeddings (introduced by
   leaving f16_kv in the fields struct)
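
For context, the failure mode described in that commit message can be sketched like this (illustrative structs only, not the real llama.cpp definitions): when the Python ctypes Structure keeps a field that the C library has removed, every field after it is read at the wrong byte offset, so later values such as the embeddings pointer come back as garbage or NULL.

import ctypes

# What the C side lays out after f16_kv was removed upstream
class CParams(ctypes.Structure):
    _fields_ = [("n_ctx", ctypes.c_int),
                ("embedding", ctypes.c_bool)]

# A binding that still declares the stale field: everything after
# f16_kv is now misaligned relative to the C layout
class StaleParams(ctypes.Structure):
    _fields_ = [("n_ctx", ctypes.c_int),
                ("f16_kv", ctypes.c_bool),  # removed from the C side
                ("embedding", ctypes.c_bool)]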
@zakki

zakki commented Dec 18, 2023

Thank you! I had the same problem and #1019 fixed it.

abetlen pushed a commit that referenced this issue Dec 18, 2023
F16_KV appears to have been removed here: ggml-org/llama.cpp@af99c6f

This addresses two issues:

 - #995 which just requests to add the KV cache offloading param
 - #1006 a NULL ptr exception when using the embeddings (introduced by
   leaving f16_kv in the fields struct)
@GluttonousCat
Author

I was able to replicate this and fix it on my end. Does this PR also fix it for y'all? #1019

Thank you! I fixed the error with #1019.
