feature/introducting-conversational-retrieval-tool-agent #2430
Conversation
@HenryHengZJ can you please review it? I'd really love to get your feedback on it. Thanks
@niztal thanks for the suggestion! Question: why do we need the context in the system message when the user can use the retriever tool? From my testing, including the context in the system message doesn't work well. For example, I was asking how to install Flowise using the following: You can see the correct installation context was placed in the system message, but it still replies that it doesn't know.
In comparison, using Retriever Tool + Agent gives a much better response:
Hi @HenryHengZJ, thanks for your feedback, I really appreciate it, and I was anticipating this 👍 Before answering your great analysis, I think there was some misunderstanding, please let me know if I'm missing anything. Based on the LangSmith reports you provided, it seems you manually set the context inside the system message. If that's correct, that's not what I meant by introducing this new agent. You need to leave the {context} as a placeholder (the same as in the Conversational Retrieval QA Chain). The agent knows to fill it in when needed by querying the Vector Store and prompting the LLM.

In general, I would say this agent's main purpose is to have a bot (RAG) based on a specific knowledge base that the model (in your case GPT-3.5) was not trained on. In your case, GPT-3.5 is already familiar with Flowise since, I guess, it was trained on web data. If you want a bot with specific instructions + a specific knowledge base (a.k.a. "context") + the ability to execute tools, then as far as I know that's not doable without my agent.

Thanks,
To create a bot that has access to specific knowledge, typically you will use the Retriever Tool - https://docs.flowiseai.com/use-cases/multiple-documents-qna#agent - and specify instructions in the System Message like:
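For example (an illustrative message of my own wording, not the exact text from the docs):

```
You are a support assistant for the knowledge base loaded into this chatflow.
Whenever the user asks a question about it, use the retriever tool to search
for relevant passages before answering. If the tool returns nothing relevant,
say that you don't know instead of guessing.
```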
Including the context in the system message usually doesn't work well because the LLM tends to lose focus as the token count increases.
Ok, I got your point. One small question: is the system message part of the LLM's prompt, and if so, does Flowise/LC send it over and over again with the model on each iteration? If so, it seems pretty expensive from a token perspective, doesn't it? Thank you.
The whole conversation, including the system message, is always sent to the OpenAI API as the messages array - https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages. It's the same as if you were to use the OpenAI API directly.
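As a minimal sketch (using the openai Node SDK purely for illustration; the model name and messages below are placeholders, not taken from this thread), every request carries the full history, system message included:

```typescript
import OpenAI from "openai";

// Each Chat Completions call resends the whole conversation, so the
// system message's tokens are billed on every turn.
const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant. Use the retriever tool when needed." },
    { role: "user", content: "How do I install Flowise?" },
    { role: "assistant", content: "You can install it with npm." },
    { role: "user", content: "And how do I start it?" }, // only this turn is new
  ],
});

console.log(response.choices[0].message.content);
```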
@HenryHengZJ just a quick clarification. The main reason for this PR is that, for some reason, the Chain Tool does not work. So, is the Chain Tool going to be fixed? I have some people asking me about this on Discord too.
@toi500 we introduced a new tool - Chatflow Tool. Can you try creating a separate chatflow that has
@HenryHengZJ that is an amazing tool, did not know that you added it. Unfortunately, it doesn't work properly according to my testing. The Of course, I tested if the other flow works and there was not any problem. |
@toi500 please take my commit and test it as well using my agent, I wonder if it'll work for your scenario. I admit I'm not an expert in langchain, so I can't explain precisely why, but all the tests I've done, including the ones suggested by @HenryHengZJ, unfortunately weren't accurate enough in comparison to the flow I've built with my agent.
It works for me, but it's slow to get a response from the other chain when using the Chatflow Tool. An agent with both QA and tool-calling capability would be really beneficial.
@soumyabhoi this is exactly what my agent here is doing, you're welcome to try it and let us know. It works perfectly for me.
@HenryHengZJ just to let you know that after a lot of testing, the new Chatflow Tool works. However, it does not work in my production deployment on Railway.
Yeah, I noticed that on Railway too, debugging. But does using Chatflow Tool + Tool Agent fit your purpose? Does it achieve what you wanted to do in the first place? Edit: fix for not working on Railway - #2482
I was "playing" with this new tool for almost two days. I must say it may produce high value in some cases, and it sounds like this tool can be very effective for many people.
Again, it may just be my own use case, but I'd rather keep my agent and use it. If you feel it's redundant for most people, we can close this PR and I'll use it only in my forked repo. Thanks anyway, highly appreciated.
@niztal, I think your new agent is a great addition to Flowise. I haven't had a chance to use it yet, but I definitely want to. I never quite understood why Chains can't use tools. Maybe it's a limitation of the LangChain framework, but I'm sure there's a good reason for it. In any case, the Chatflow tool, along with LangGraph, opens up a whole new paradigm here on Flowise. However, I still think it would be incredibly useful to have either your agent or the Chain tool fixed. (I find the latency of the Chatflow tool a bit too much.) Personally, I'm very grateful for all your hard work here to help the community and this project.
@niztal yep, totally get it, and really appreciate you putting in the effort to create this new agent! We're working on a new plugin system that allows community nodes, and you will also get an attribution tag on the node. I think that's a better place to put it, as we also want to encourage more community nodes from you guys! Will leave this PR open for now, until we have that plugin feature in place, then we can migrate it over!
Thanks @HenryHengZJ, highly appreciated, that sounds perfect. LMK when this feature is live.
I have been desperately waiting for this for the last month, and I am unable to launch into production or beta without it. Any alternative would be an immense help. It seems like a race against time to get an agent that can both chat and call tools.
@soumyabhoi I guess once we have the community nodes that @HenryHengZJ mentioned above, it will be globally available. For now I'm using it in my own version; if it fits your use case, I guess you can do the same. LMK if you need any help. Thanks
@soumyabhoi, until the new plugin system is released where the community can add custom nodes (like this one), you can use the new Chatflow Tool, which allows the Tool Agent to manage a whole chatflow (where you host your RAG). https://docs.flowiseai.com/integrations/langchain/tools/chatflow-tool
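Conceptually, the Chatflow Tool just lets the agent call another chatflow the same way you would over Flowise's prediction API. A rough hand-rolled equivalent (the base URL and chatflow id below are placeholders; the endpoint path follows the Flowise API docs):

```typescript
// Hypothetical sketch of calling a RAG chatflow directly over the
// Flowise prediction API - roughly what the Chatflow Tool does for the agent.
const FLOWISE_URL = "http://localhost:3000"; // placeholder base URL
const CHATFLOW_ID = "00000000-0000-0000-0000-000000000000"; // placeholder id

async function askRagChatflow(question: string): Promise<string> {
  const res = await fetch(`${FLOWISE_URL}/api/v1/prediction/${CHATFLOW_ID}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  const data = await res.json();
  return data.text; // the prediction response returns the answer in `text`
}
```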
@niztal we have added community node support #2902! Can you grant me access to edit your repo? If not, can you pull the latest changes and add a new property
@HenryHengZJ done ✅ sorry for the late reply, I was too busy with other tasks. Please review
Thanks @niztal! Reminder: set
@niztal A conceptual question: why does this node only import the … and not the …?
Hi @toi500, TBH I'm not sure why; it was a long time ago and the current state answered my needs back then. Feel free to contribute and add whatever you need. Thanks
@niztal if you don't mind, I am going to check how it performs with the original prompt structure and I will report back.
The Conversational Retrieval QA Chain is a very helpful chain for building a RAG flow on top of a VectorStore Retriever.
One of its major capabilities is querying the vector store with the user's input, then calling the LLM with the query results substituted into the {context} placeholder, so that this {context} becomes part of the LLM's prompt.
But since it's just a chain, it doesn't have the capability to use Tools as well.
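As a rough sketch of the {context} mechanism described above (my own illustration, not the chain's actual code; the function name and the minimal retriever/LLM interfaces are made up for the example):

```typescript
// Illustrative only: how a conversational-retrieval step fills {context}
// before calling the LLM. All names here are hypothetical.
interface Retriever {
  getRelevantDocuments(query: string): Promise<{ pageContent: string }[]>;
}
interface LLM {
  invoke(prompt: string): Promise<string>;
}

async function answerWithContext(
  question: string,
  retriever: Retriever,
  llm: LLM,
  systemTemplate: string // e.g. "Answer using only this context:\n{context}"
): Promise<string> {
  // 1. Query the vector store with the user's input
  const docs = await retriever.getRelevantDocuments(question);
  // 2. Join the retrieved chunks and substitute them for the {context} placeholder
  const context = docs.map((d) => d.pageContent).join("\n\n");
  const systemMessage = systemTemplate.replace("{context}", context);
  // 3. Call the LLM with the filled-in system message plus the question
  return llm.invoke(`${systemMessage}\n\nQuestion: ${question}`);
}
```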
The only options are (as described in Bug & Question):
The 3rd option works the best 🚀
@HenryHengZJ please review, I would love to get feedback from you or anyone else 🙏