Replies: 3 comments
-
I guess the embedding model used for file parsing and the one used for chatting are not the same.
-
Double-checked and thoroughly tested. When using Ollama, if I choose command-r or gemma etc., it gives me the same error. If I use llama3:8b, it always works, no matter what the default in the system settings is or which chat model I choose. Right now I'm using GPT-4o for chat and llama3:8b for embedding and it works fine. If I change the embedding model to anything else under Ollama, and make sure it matches across all configurations, I still get the error.
-
Elasticsearch (ES) only supports vectors of no more than 1024 dimensions. This might be the reason.
-
Any time I use an LLM that is not llama3, I get this error: ERROR: BadRequestError(400, 'search_phase_execution_exception', 'failed to create query: [knn] queries are only supported on [dense_vector] fields')
I only choose LLMs from the Ollama list, like command-r etc.
And they work great from the command line.
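For reference, a mapping that does support knn queries looks roughly like this in Elasticsearch's console syntax (index and field names are illustrative; in ES 8.x, `"index": true` enables knn search on the field). If the field being queried was never mapped this way, or was mapped for a different `dims`, the `[knn] queries are only supported on [dense_vector] fields` error above is the expected failure mode:

```
PUT /my-index
{
  "mappings": {
    "properties": {
      "embedding": {
        "type": "dense_vector",
        "dims": 1024,
        "index": true,
        "similarity": "cosine"
      }
    }
  }
}
```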