Documentation for agents #1057

Merged · 50 commits · Jun 15, 2023
9997aa6
add agent notebook and documentation
qingyun-wu May 25, 2023
6efb955
Merge branch 'main' into agent_doc
qingyun-wu May 25, 2023
db3eeff
fix bug
sonichi May 25, 2023
c634fe2
set flush to True when printing msg in agent
qingyun-wu May 25, 2023
ca26990
Merge branch 'main' into agent_doc
qingyun-wu May 25, 2023
de1b9e7
add a math problem in agent notebook
qingyun-wu May 26, 2023
38151f8
remove
qingyun-wu May 26, 2023
0f82874
header
qingyun-wu May 26, 2023
60ed010
improve notebook doc
qingyun-wu May 26, 2023
74fb459
notebook update
sonichi May 26, 2023
a9aa9f9
improve notebook example
qingyun-wu May 26, 2023
cf36470
improve doc
qingyun-wu May 26, 2023
a400c53
improve notebook doc
qingyun-wu May 27, 2023
e6e3fc4
improve print
qingyun-wu May 27, 2023
6f27559
Merge branch 'main' into agent_doc
qingyun-wu May 27, 2023
ff462e4
doc
qingyun-wu May 27, 2023
5dbee0c
human_input_mode
qingyun-wu May 27, 2023
2da26bb
human_input_mode str
qingyun-wu May 27, 2023
d28892c
indent
qingyun-wu May 27, 2023
f9c1770
indent
qingyun-wu May 27, 2023
2b67168
Merge branch 'main' into agent_doc
qingyun-wu May 27, 2023
3422bdc
Update flaml/autogen/agent/user_proxy_agent.py
qingyun-wu May 27, 2023
9ab7379
Update notebook/autogen_agent.ipynb
qingyun-wu May 27, 2023
04b2cfb
Update notebook/autogen_agent.ipynb
qingyun-wu May 27, 2023
d9aebc6
Update notebook/autogen_agent.ipynb
qingyun-wu May 27, 2023
447a204
add agent doc
qingyun-wu May 27, 2023
b4692c4
Merge remote-tracking branch 'origin/main' to doc
qingyun-wu May 28, 2023
e39a641
del old files
qingyun-wu May 28, 2023
3ea6372
remove chat
qingyun-wu May 29, 2023
0daa392
agent doc
qingyun-wu May 29, 2023
55b1826
remove chat_agent
qingyun-wu May 29, 2023
fe39e97
naming
qingyun-wu May 30, 2023
d880b8e
improve documentation
qingyun-wu Jun 11, 2023
868adc1
Merge branch 'main' into doc
qingyun-wu Jun 11, 2023
84383ef
wording
qingyun-wu Jun 11, 2023
611435d
Merge branch 'doc' of https://github.com/microsoft/FLAML into doc
qingyun-wu Jun 11, 2023
43a2f59
improve agent doc
qingyun-wu Jun 11, 2023
bb07897
wording
qingyun-wu Jun 11, 2023
6514972
general auto reply
qingyun-wu Jun 11, 2023
a871f59
update agent doc
qingyun-wu Jun 13, 2023
1ebfbc2
Merge branch 'main' into doc
qingyun-wu Jun 13, 2023
f5a5b9d
Merge branch 'doc' of https://github.com/microsoft/FLAML into doc
qingyun-wu Jun 13, 2023
a3c82af
human input mode
qingyun-wu Jun 14, 2023
8c44bad
add agent figure
qingyun-wu Jun 14, 2023
63fa5ad
update agent figure
qingyun-wu Jun 14, 2023
805ad25
update agent example figure
qingyun-wu Jun 14, 2023
da7a82e
Merge branch 'main' into doc
qingyun-wu Jun 14, 2023
3094800
update code example
qingyun-wu Jun 14, 2023
203e671
Merge branch 'doc' of https://github.com/microsoft/FLAML into doc
qingyun-wu Jun 14, 2023
17b08b1
extensibility of UserProxyAgent
qingyun-wu Jun 14, 2023
25 changes: 0 additions & 25 deletions flaml/autogen/agent/chat_agent.py

This file was deleted.

32 changes: 0 additions & 32 deletions test/autogen/test_user_proxy_agent.py

This file was deleted.

55 changes: 53 additions & 2 deletions website/docs/Use-Cases/Auto-Generation.md
@@ -382,8 +382,59 @@ The compact history is more efficient and the individual API call history contain

[`flaml.autogen.agents`](../reference/autogen/agent/agent) contains an experimental implementation of interactive agents which can adapt to human or simulated feedback. This subpackage is under active development.

*Interested in trying it yourself? Please check the following notebook example:*
* [Use agents in FLAML to perform tasks with code](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb)
We have designed different classes of agents that communicate with each other by exchanging messages to collaboratively finish a task. An agent can communicate with other agents and perform actions. Agents differ in the actions they perform in the `receive` method.

### `AssistantAgent`

`AssistantAgent` is an agent class designed to act as an assistant by responding to user requests. When it receives a message (typically a description of a task that needs to be solved), it can write Python code (in a Python coding block) for the user to execute. Under the hood, the Python code is written by an LLM (e.g., GPT-4).

### `UserProxyAgent`
`UserProxyAgent` is an agent class that serves as a proxy for the human user. Upon receiving a message, the `UserProxyAgent` either solicits the human user's input or prepares an automatically generated reply. The chosen action depends on the settings of `human_input_mode` and `max_consecutive_auto_reply` specified when the `UserProxyAgent` instance is constructed, and on whether human user input is available.

Currently, the automatically generated reply is crafted based on automatic code execution: the `UserProxyAgent` triggers code execution automatically when it detects an executable code block in the received message and no human user input is provided. We plan to add more capabilities to `UserProxyAgent` beyond code execution. One can also easily extend it by overriding the `auto_reply` function of the `UserProxyAgent` to add or modify responses to specific types of messages from the `AssistantAgent`. For example, one can extend it to execute function calls to external APIs, which is especially useful with the newly added [function calling capability of OpenAI's Chat Completions API](https://openai.com/blog/function-calling-and-other-api-updates?ref=upstract.com). This auto-reply capability allows for more autonomous user-agent communication while retaining the possibility of human intervention.
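The override pattern described above can be sketched as follows. This is an illustrative, self-contained sketch only: the stand-in base class and the `CALL:` message convention are hypothetical, and the real `UserProxyAgent.auto_reply` in `flaml.autogen.agent` may have a different signature.

```python
class UserProxyAgentStub:
    """Hypothetical stand-in mimicking the documented `auto_reply` hook."""

    def auto_reply(self, message, sender):
        # default behavior: pretend to execute a detected code block
        return "executed code from: " + message


class FunctionCallingUserProxy(UserProxyAgentStub):
    """Overrides `auto_reply` to handle an additional message type."""

    def auto_reply(self, message, sender):
        # intercept messages that request a (hypothetical) external API call
        if message.startswith("CALL:"):
            endpoint = message[len("CALL:"):].strip()
            return f"result of calling {endpoint}"
        # fall back to the default code-execution reply
        return super().auto_reply(message, sender)


proxy = FunctionCallingUserProxy()
print(proxy.auto_reply("CALL: weather_api", sender=None))  # result of calling weather_api
print(proxy.auto_reply("x = 1", sender=None))              # executed code from: x = 1
```

The key design point is that unrecognized messages fall through to `super().auto_reply`, so the extension composes with, rather than replaces, the default behavior.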

Example usage of the agents to solve a task with code:
```python
from flaml.autogen.agent import AssistantAgent, UserProxyAgent

# create an AssistantAgent instance named "assistant"
assistant = AssistantAgent(name="assistant")

# create a UserProxyAgent instance named "user_proxy"
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # in this mode, the agent will never solicit human input but always auto reply
    max_consecutive_auto_reply=10,  # the maximum number of consecutive auto replies
    # the function to determine whether a message is a termination message
    is_termination_msg=lambda x: x.rstrip().endswith("TERMINATE") or x.rstrip().endswith('"TERMINATE".'),
    work_dir=".",
)

# the assistant receives a message from the user, which contains the task description
assistant.receive(
    """What date is today? Which big tech stock has the largest year-to-date gain this year? How much is the gain?""",
    user_proxy,
)
```
In the example above, we create an AssistantAgent named "assistant" to serve as the assistant and a UserProxyAgent named "user_proxy" to serve as a proxy for the human user.
1. The assistant receives a message from the user_proxy, which contains the task description.
2. The assistant then tries to write Python code to solve the task and sends the response to the user_proxy.
3. Once the user_proxy receives a response from the assistant, it tries to reply by either soliciting human input or preparing an automatically generated reply. In this specific example, since `human_input_mode` is set to `"NEVER"`, the user_proxy will not solicit human input but prepare an automatically generated reply (auto reply). More specifically, the user_proxy executes the code and uses the result as the auto-reply.
4. The assistant then generates a further response for the user_proxy. The user_proxy can then decide whether to terminate the conversation. If not, steps 3 and 4 are repeated.
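The message loop in steps 1–4 can be sketched with minimal stand-in classes. Only the `receive` method name follows the documentation; the classes below are illustrative stubs, not FLAML's implementation.

```python
class StubAssistant:
    """Pretends to write code in response to each message (step 2)."""

    def receive(self, message, sender):
        reply = f"```python\n# code solving: {message}\n```"
        sender.receive(reply, self)


class StubUserProxy:
    """Auto-replies with a fake execution result until the limit is hit (steps 3-4)."""

    def __init__(self, max_consecutive_auto_reply):
        self.max_consecutive_auto_reply = max_consecutive_auto_reply
        self.auto_reply_count = 0

    def receive(self, message, sender):
        if self.auto_reply_count >= self.max_consecutive_auto_reply:
            return  # conversation stops: auto-reply limit reached
        self.auto_reply_count += 1
        sender.receive(f"execution result #{self.auto_reply_count}", self)


assistant = StubAssistant()
user_proxy = StubUserProxy(max_consecutive_auto_reply=3)
# step 1: the assistant receives the task description from the user proxy
assistant.receive("task description", user_proxy)
print(user_proxy.auto_reply_count)  # 3
```

Each `receive` call triggers the counterpart's `receive`, so the back-and-forth continues until the user proxy declines to reply.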

The figure below illustrates how the `UserProxyAgent` and `AssistantAgent` collaboratively solve the task above:
![Agent Example](images/agent_example.png)

Notes:
- Under the mode `human_input_mode="NEVER"`, the multi-turn conversation between the assistant and the user_proxy stops when the number of auto replies reaches the upper limit specified by `max_consecutive_auto_reply`, or when the received message is a termination message according to `is_termination_msg`.
- When `human_input_mode` is set to `"ALWAYS"`, the user proxy agent solicits human input every time a message is received, and the conversation stops when the human input is "exit", or when the received message is a termination message and no human input is provided.
- When `human_input_mode` is set to `"TERMINATE"`, the user proxy agent solicits human input only when a termination message is received or the number of auto replies reaches `max_consecutive_auto_reply`.
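The three modes above differ only in when human input is solicited. As an illustrative decision rule (not FLAML code; the function name and parameters are hypothetical):

```python
def solicit_human_input(mode, is_termination, n_auto_replies, max_auto):
    """Illustrative sketch of when each documented mode asks for human input."""
    if mode == "ALWAYS":
        return True   # ask on every received message
    if mode == "NEVER":
        return False  # never ask; the conversation stops via the limits instead
    if mode == "TERMINATE":
        # ask only on a termination message or when the auto-reply limit is hit
        return is_termination or n_auto_replies >= max_auto
    raise ValueError(f"unknown mode: {mode}")


print(solicit_human_input("TERMINATE", is_termination=True, n_auto_replies=0, max_auto=10))  # True
print(solicit_human_input("NEVER", is_termination=True, n_auto_replies=0, max_auto=10))      # False
```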

*Interested in trying it yourself? Please check the following notebook examples:*
* [Interactive LLM Agent with Auto Feedback from Code Execution](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb)

* [Interactive LLM Agent with Human Feedback](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent_human_feedback.ipynb)

* [Interactive LLM Agent Dealing with Web Info](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent_web_info.ipynb)

## Utilities for Applications

Binary file added website/docs/Use-Cases/images/agent_example.png