-
Hi @nasreekar I think what you're facing is the difference between RAG, intent detection, and plugin execution. RAG is a specific action that searches for an answer, assuming the user sent a question. However, users are often just chatting, or asking for something else entirely.
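One way to handle this on your side is a small routing wrapper: try the memory-backed /ask call first, and fall back to a plain LLM chat completion when no relevant memories are found. This is only a minimal sketch, not built-in KM behavior; the function names and the null-for-no-result convention are assumptions.

```typescript
// Either function resolves to an answer string, or null for "no answer".
type AnswerFn = (question: string) => Promise<string | null>;

// Try the KM /ask endpoint first; if it reports no relevant memories,
// fall back to a direct LLM call instead of surfacing the error.
async function askWithFallback(
  question: string,
  askMemory: AnswerFn, // hypothetical wrapper around POST /ask on the KM service
  askLlm: AnswerFn     // hypothetical wrapper around a plain chat-completion call
): Promise<string> {
  const fromMemory = await askMemory(question);
  if (fromMemory !== null) return fromMemory; // grounded RAG answer wins
  const fromLlm = await askLlm(question);
  return fromLlm ?? "Sorry, I could not find an answer.";
}
```

The key design choice is that the grounded answer is always preferred, so the LLM only runs for queries the ingested documents cannot cover (like the currency conversion example).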
I hope this helps and allows you to make some progress.
-
Hello,
I'm working on a project using KM with a React frontend chat UI. I've set up the repository as a local service (functioning as a backend via dotnet run; I believe in serverless mode) and built a React application to interact with it. I've converted the bash scripts for uploading and querying (upload.sh, ask.sh) into API calls (/upload, /ask) that the frontend invokes with request bodies. Currently, everything works as expected for queries related to the ingested documents.
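For reference, my /ask call from the React side looks roughly like this; the port and the request-body field names are assumptions based on how I translated ask.sh, not an authoritative spec:

```typescript
// Build the URL and fetch() options for the KM /ask endpoint.
// The port (9001) and body shape ({ question, index }) are assumptions.
function buildAskRequest(question: string, index = "default") {
  return {
    url: "http://localhost:9001/ask",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question, index }),
    },
  };
}

// Usage in the chat UI:
//   const { url, init } = buildAskRequest(userMessage);
//   const answer = await (await fetch(url, init)).json();
```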
However, I'm encountering an issue when trying to obtain information outside the scope of the ingested documents. Is this the expected behavior?
For example, I uploaded a document containing invoice amounts in a specific currency format. When I ask the system to convert this to a different currency, I receive a "No relevant memories found" response because it's calling the /ask endpoint. Is there a way to utilize an LLM for these types of queries rather than relying solely on API responses?
I'm looking for guidance on how to enable LLM functionality for queries that go beyond the content of the ingested documents. Any advice or suggestions would be greatly appreciated.
Thank you for your time and assistance.