Dimensions are always wrong when calling ainsert() #75
Comments
You could try creating a new working directory and running it again (see the sketch below).
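A minimal sketch of that suggestion, assuming the `LightRAG` constructor used in the repository's examples; the directory name `./fresh_workdir` is just an illustration, any empty directory works:

```python
import os
from lightrag import LightRAG

# Point LightRAG at a brand-new working directory so the vector storage
# is rebuilt from scratch with the currently configured embedding dimension.
WORKING_DIR = "./fresh_workdir"  # hypothetical path

if not os.path.exists(WORKING_DIR):
    os.mkdir(WORKING_DIR)

rag = LightRAG(working_dir=WORKING_DIR)  # other arguments omitted for brevity
```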
That doesn't work, I still get the same error. Is something set to 1024 somewhere?
Show me the log, and the exact code you are running.
Same error here, and clearing the working directory cache does not help:

```
(cail) (base) dell@dell-PowerEdge-R750:~/test/LexiLaw-main/demo$ /home/dell/anaconda3/envs/cail/bin/python /home/dell/test/LexiLaw-main/demo/LightRAG-main/examples/lightrag_openai_compatible_demo.py
INFO:lightrag:Load KV full_docs with 0 data
INFO:httpx:HTTP Request: POST http://192.168.31.100:3001/v1/chat/completions "HTTP/1.1 200 OK"
INFO:lightrag:Load KV full_docs with 0 data
```
```python
# Sections collapsed in the original report are marked with "# ..."
import os
# ...
WORKING_DIR = "./aka"

if not os.path.exists(WORKING_DIR):
    # ...

async def llm_model_func(  # ...
async def embedding_func(texts: list[str]) -> np.ndarray:  # ...

# function test
async def test_funcs():
    # ...
asyncio.run(test_funcs())
exit()

rag = LightRAG(  # ...

with open("./book.txt") as f:
    # ...

# Perform naive search
print(  # ...
# Perform local search
print(  # ...
# Perform global search
print(  # ...
# Perform hybrid search
print(  # ...
```
I get a similar error, hoping someone can explain it.
+1
Solved this problem by modifying the `embedding_dim` in … In my case, I changed the default embedding model to …

I resolved this problem by changing the … In this case, change …
It is probably the `embedding_dim` not being set correctly: the embedding dimension at query time is different from the dimension already stored in your local working directory.
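A minimal sketch of keeping the dimension consistent, assuming the `EmbeddingFunc` wrapper and `openai_embedding` helper used in the repository's examples at the time; the model name, endpoint, API key, and the value 1024 are placeholders and must match whatever your embedding service actually returns:

```python
import numpy as np
from lightrag import LightRAG
from lightrag.llm import openai_embedding
from lightrag.utils import EmbeddingFunc

async def embedding_func(texts: list[str]) -> np.ndarray:
    # Hypothetical OpenAI-compatible embedding endpoint and model.
    return await openai_embedding(
        texts,
        model="bge-m3",                       # placeholder model returning 1024-dim vectors
        base_url="http://localhost:3001/v1",  # placeholder endpoint
        api_key="sk-...",
    )

rag = LightRAG(
    working_dir="./aka",
    embedding_func=EmbeddingFunc(
        embedding_dim=1024,   # must equal the model's actual output dimension
        max_token_size=8192,
        func=embedding_func,
    ),
    # llm_model_func=...  (omitted)
)
```

If the working directory was first built with a model of a different dimension, it also has to be re-indexed (or a fresh directory used), since the stored vectors keep their original width.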
[hotfix-#75][embedding] Fix the potential embedding problem
I tried the examples from the source code. Whether I use ollama, openai, or an openai-compatible endpoint, they all raise the same error.

Error message:
/LightRAG/lightrag/lightrag.py", line 162, in insert
return loop.run_until_complete(self.ainsert(string_or_strings))
all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 768 and the array at index 1 has size 1024
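For context (not from the original thread): this is the message numpy raises when arrays of different widths are concatenated, which is what happens when previously stored 768-dim embeddings are combined with freshly computed 1024-dim ones. A minimal sketch reproducing the same failure:

```python
import numpy as np

# 768-dim vectors, e.g. what an earlier embedding model wrote into the working directory.
stored = np.random.rand(4, 768)
# 1024-dim vectors, e.g. what the currently configured embedding model returns.
new = np.random.rand(2, 1024)

# Raises ValueError: all the input array dimensions except for the concatenation
# axis must match exactly, but along dimension 1, the array at index 0 has size
# 768 and the array at index 1 has size 1024
np.concatenate([stored, new], axis=0)
```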