Provide tool error feedback agent #431
Conversation
…be punished when it commits mistakes
I just don't know a good way to fix the breaking integration tests. It seems to be an issue with the tests themselves, not the application code.
Cool - thanks - this was needed.
Yes, of course. If you could create a new tag it would be helpful, as my environment is set to use uvx to install the latest version by default |
I'm releasing now (just completing the e2e suite before pushing the button).
0.3.15 is on PyPI now, including this PR. I think this is also relevant: modelcontextprotocol/modelcontextprotocol#1303, likely to be included in the spec.
    for content in tool_result.content
]
if tool_result_contents:
    if result.content is None:
Looking at this, I imagine this line is unnecessary, right?
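The quoted diff is only a fragment, so as a rough sketch (the function name, parameters, and surrounding structure are assumed from the visible lines, not copied from the PR), the pattern under discussion looks something like this, with the inner `if result.content is None:` guard being the line the comment calls into question:

```python
# Hedged reconstruction of the reviewed pattern; names and structure are
# assumed from the visible diff lines, not the exact PR code.
def merge_tool_contents(tool_result, result) -> None:
    """Append the tool call's content items onto the aggregated result."""
    tool_result_contents = [
        content
        for content in tool_result.content
    ]
    if tool_result_contents:
        # The review comment suggests this None check may be unnecessary
        # if result.content is always initialised before this point.
        if result.content is None:
            result.content = []
        result.content.extend(tool_result_contents)
```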
Great, will test this.
Today, when a tool produces an error, the tool agent silently drops it and does not return the result to the AppAgent, so the user cannot send custom messages to punish the AI when the tool was misused.
This PR returns the failure as feedback to the AppAgent, which can then pass it as a custom message to the LLM or make any other decision based on the error details.
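As a rough sketch of what this enables (assuming an MCP-style `CallToolResult` with `isError` and `content` fields; the helper name and the feedback wording are illustrative, not the PR's actual code), the AppAgent can now inspect the forwarded failure and turn it into a custom message for the LLM:

```python
from mcp.types import CallToolResult, TextContent

def feedback_for_llm(tool_result: CallToolResult) -> str | None:
    """Build a custom feedback message when a tool call failed.

    Before this PR the error never reached the AppAgent; with the failed
    CallToolResult forwarded, its details can be passed to the LLM or used
    to drive any other decision.
    """
    if not tool_result.isError:
        return None  # nothing to report on success

    # Collect whatever textual error detail the tool returned.
    details = " ".join(
        block.text for block in tool_result.content
        if isinstance(block, TextContent)
    )
    return f"The previous tool call failed: {details or 'no details provided'}."
```

The resulting message could equally be appended to the conversation history or used to trigger a retry or abort, depending on how the AppAgent is configured.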