Conversation

@yanxi0830 (Contributor) commented on Feb 4, 2025

What does this PR do?

Changes
✅ Bugfix ToolResponseMessage role
✅ Add ReACT default prompt + default output parser
✅ Add ReACTAgent wrapper
🚧 Remove ClientTool and simplify it as a decorator (separate PR, including llama-stack-apps)
✅ Make agent able to return structured outputs

  • Note that some remote providers do not support response_format structured outputs, so it is added as an optional flag when calling the ReActAgent wrapper (see the sketch below).
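
A rough usage sketch of the wrapper. The import path, the tools parameter, and the json_response_format flag name are assumptions drawn from later llama-stack-client examples, not verified against this PR's revision; see llamastack/llama-stack-apps#166 for the tested usage.

```
# Hedged sketch of calling the ReActAgent wrapper added in this PR.
# Import path and parameter names are assumptions, not verified API.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.react.agent import ReActAgent  # path assumed

client = LlamaStackClient(base_url="http://localhost:8321")

agent = ReActAgent(
    client=client,
    model="meta-llama/Llama-3.3-70B-Instruct",
    tools=["builtin::websearch"],  # parameter name assumed
    # Structured output is optional because some remote providers do not
    # support response_format; flag name assumed.
    json_response_format=True,
)
```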

Test Plan

see test in llamastack/llama-stack-apps#166

Sources

Please link relevant resources if necessary.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Ran pre-commit to handle lint / formatting issues.
  • Read the contributor guideline,
    Pull Request section?
  • Updated relevant documentation.
  • Wrote necessary unit or integration tests.

@hardikjshah (Contributor) left a comment


Thanks for this change; a couple of comments, but looks good (Y)

@yanxi0830 merged commit 9dda45e into main on Feb 5, 2025 (2 checks passed).
@yanxi0830 deleted the react_agent branch on February 5, 2025 at 23:24.
```
try:
    react_output = ReActOutput.model_validate_json(response_text)
except ValidationError as e:
    print(f"Error parsing action: {e}")
    return output_message
```
A contributor asked:
Does the turn just terminate after this point?

@yanxi0830 (Contributor, Author) replied on Feb 5, 2025:

Yes, the turn will terminate after this point, as there are no tool calls in Agent. We can override orchestration in ReActAgent to continue the loop and think again when a tool call is not parsed correctly, until an "answer" is reached.
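
A minimal sketch of that retry idea, not the PR's actual orchestration: the ReActOutput/Action shapes below are assumptions modeled on the standard ReAct format (thought / action / answer), and the inference call is passed in as a parameter.

```
# Minimal sketch of "think again until an answer is reached"; the model
# shapes are assumptions, and get_model_response is supplied by the caller.
from typing import Callable, Optional

from pydantic import BaseModel, ValidationError


class Action(BaseModel):
    tool_name: str
    tool_params: dict


class ReActOutput(BaseModel):
    thought: str
    action: Optional[Action] = None
    answer: Optional[str] = None


def run_until_answer(
    get_model_response: Callable[[list], str],  # hypothetical inference hook
    messages: list,
    max_retries: int = 3,
) -> Optional[ReActOutput]:
    for _ in range(max_retries):
        response_text = get_model_response(messages)
        try:
            react_output = ReActOutput.model_validate_json(response_text)
        except ValidationError:
            # ask the model to re-emit well-formed ReAct JSON and loop again
            messages.append({"role": "user", "content": "Re-emit valid ReAct JSON."})
            continue
        if react_output.answer is not None:
            return react_output
        # otherwise: execute react_output.action and append the tool result
    return None
```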

```
# by default, we stop after the first turn
stop = True
for chunk in response:
    self._process_chunk(chunk)
```
@ehhuang (Contributor) commented:
I wonder if it's cleaner to only have the override as get_tool_call(chunk) instead of a generic output parser. This way:

  1. It's clearer what the user is supposed to override (a rough sketch of this interface follows below).
  2. We can actually simplify the logic below as:

```
# the default tool_call_getter just returns `chunk...tool_calls`
tool_call = self.tool_call_getter.get_tool_call(chunk)
if not tool_call:
    yield chunk
    return
else:
    # run tool
    ...
```

  3. Bonus: we can also be more functional and not overwrite chunk with the parsed tool calls.
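
What that override point could look like; the class names, the chunk attribute path, and the parse_react_json helper are all illustrative assumptions, not the repo's actual interface.

```
# Illustrative sketch of a narrow get_tool_call(chunk) override point.
# The chunk attribute path and parse_react_json are assumptions.
from typing import Any, Optional


class ToolCallGetter:
    """Default: pass through whatever tool call the provider already parsed."""

    def get_tool_call(self, chunk: Any) -> Optional[Any]:
        delta = chunk.event.payload.delta  # attribute path assumed
        tool_calls = getattr(delta, "tool_calls", None)
        return tool_calls[0] if tool_calls else None


class ReActToolCallGetter(ToolCallGetter):
    """ReAct variant: recover the action from the model's JSON text."""

    def get_tool_call(self, chunk: Any) -> Optional[Any]:
        react_output = parse_react_json(chunk)  # hypothetical parser
        if react_output is None or react_output.action is None:
            return None
        return react_output.action
```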

Another contributor replied:

@ehhuang +100 especially the (3) bonus

@yanxi0830 (Contributor, Author) replied:

Some updates are in #130.

However, we still need to overwrite the chunk with the parsed tool calls, as ClientTool.run takes in a message history and expects the ToolCall detail in the last message.
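
To illustrate that constraint (the message fields and variable names below are assumptions about the shape, not verified against the repo):

```
# Sketch of the constraint described above; all names here are illustrative.
# ClientTool.run takes the message history and expects the ToolCall detail in
# the last message, so the parsed ReAct action is written back before dispatch.
parsed_tool_call = {"tool_name": "get_weather", "arguments": {"city": "SF"}}
messages = [{"role": "user", "content": "What's the weather in SF?"}]

messages.append(
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [parsed_tool_call],  # written back from the ReAct parser
        "stop_reason": "end_of_turn",
    }
)

# client_tool.run(messages) would now read messages[-1]["tool_calls"];
# `client_tool` stands in for a ClientTool instance.
```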

yanxi0830 added a commit that referenced this pull request on Feb 5, 2025:
# What does this PR do?

- address comments in
#121


## Test Plan

- see llamastack/llama-stack-apps#166

## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
@yanxi0830 mentioned this pull request on Feb 7, 2025.
yanxi0830 added a commit that referenced this pull request on Feb 7, 2025:
# What does this PR do?

- See discussion in
#121 (comment)

## Test Plan

test with llamastack/llama-stack-apps#166

```
LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/client-sdk/agents/test_agents.py::test_override_system_message_behavior --inference-model "meta-llama/Llama-3.3-70B-Instruct"
```
<img width="1697" alt="test output" src="https://github.com/user-attachments/assets/c036cbf6-9fc1-4064-82af-fa1984300653" />


## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.