Merge pull request #76 from alipay/dev_chongshi
feat: add demo discussion agents in agentUniverse.
LandJerry authored Jun 13, 2024
2 parents 50c166c + a53b057 commit 4fc1dc5
Showing 15 changed files with 162 additions and 41 deletions.
4 changes: 2 additions & 2 deletions agentuniverse/agent/plan/planner/planner.py
@@ -74,10 +74,10 @@ def handle_memory(self, agent_model: AgentModel, planner_input: dict) -> ChatMem
chat_history: list = planner_input.get('chat_history')
memory_name = agent_model.memory.get('name')
llm_model = agent_model.memory.get('llm_model') or dict()
llm_name = llm_model.get('name')
llm_name = llm_model.get('name') or agent_model.profile.get('llm_model').get('name')

messages: list[Message] = generate_messages(chat_history)
llm: LLM = LLMManager().get_instance_obj(llm_name) if llm_name else None
llm: LLM = LLMManager().get_instance_obj(llm_name)
params: dict = dict()
params['messages'] = messages
params['llm'] = llm
118 changes: 118 additions & 0 deletions docs/guidebook/en/6_2_1_Discussion_Group.md
@@ -0,0 +1,118 @@
# Discussion: Multi-Agent Discussion Group
In this use case, we will demonstrate how to use agentUniverse for multi-agent discussions.

There are two agent roles: discussion group host (one) and discussion group participants (several).

The user initiates a discussion topic, and the host organizes the participants to start the discussion. In each round, every participant expresses a view on the topic (these views are not fixed and are continually adjusted as the discussion deepens). After several rounds, the host summarizes the discussion process and returns the participants' conclusions to the user.

Many heads are better than one: imagine you have a question you want answered and hand it directly to the agentUniverse discussion group. The host brings together multiple agents with different ideas, focuses them on your question, and finally gives you a comprehensive, collectively reasoned answer. It is an interesting agent experiment.

## Quick Start
### Configure API Key
For example, configure the key information in `custom_key.toml`, the file in which agentUniverse manages private key configuration. (The agentUniverse discussion group uses GPT as the base model and Serper as the Google search tool by default; using other models or tools is described below.)
```toml
[KEY_LIST]
# serper google search key
SERPER_API_KEY='xxx'
# openai api key
OPENAI_API_KEY='xxx'
```
### Modify Prompt Version
Switch the agents' prompts to the English version. The discussion group multi-agent configuration files are located in the app/core/agent/discussion_agent_case directory of the `sample_standard_app` sample project. Find `host_agent.yaml` and change `prompt_version` to `host_agent.en`; similarly, find the participant YAML files and change their `prompt_version` to `participant_agent.en` (a minimal participant sketch follows the host configuration below).

```yaml
info:
  name: 'host_agent'
  description: 'host agent'
profile:
  prompt_version: host_agent.en
  llm_model:
    name: 'demo_llm'
    temperature: 0.6
plan:
  planner:
    name: 'discussion_planner'
    round: 2
    participant:
      name:
        - 'participant_agent_one'
        - 'participant_agent_two'
memory:
  name: 'demo_memory'
metadata:
  type: 'AGENT'
  module: 'sample_standard_app.app.core.agent.discussion_agent_case.host_agent'
  class: 'HostAgent'
```
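The participant files change in the same way. The sketch below shows only the relevant fields of a participant configuration; apart from `prompt_version`, the values are illustrative and should be copied from the actual `participant_agent_one.yaml` in your project.

```yaml
# Illustrative sketch -- keep every other field of the existing participant_agent_one.yaml unchanged.
info:
  name: 'participant_agent_one'
  description: 'participant agent one'
profile:
  prompt_version: participant_agent.en   # switched from the default participant_agent.cn
  llm_model:
    name: 'demo_llm'
    temperature: 0.6
memory:
  name: 'demo_memory'
```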
### Run Discussion Group
In the `sample_standard_app` sample project of agentUniverse, find the `discussion_chat_bots.py` file in the app/examples directory, enter the question you want answered in the chat method, and run it.

For example, enter the question: Which tastes better, Coca-Cola or Pepsi?
```python
from agentuniverse.base.agentuniverse import AgentUniverse
from agentuniverse.agent.agent import Agent
from agentuniverse.agent.agent_manager import AgentManager

AgentUniverse().start(config_path='../../config/config.toml')


def chat(question: str):
    instance: Agent = AgentManager().get_instance_obj('host_agent')
    instance.run(input=question)


if __name__ == '__main__':
    chat("Which tastes better, Coca-Cola or Pepsi")
```

## More Details
### Agent Configuration
The discussion group multi-agent configuration files are located in the app/core/agent/discussion_agent_case directory in the `sample_standard_app` sample project.

The host corresponds to `host_agent.yaml`, and the participants correspond to `participant_agent_*.yaml`.


Prompts are managed by version number. For example, the default prompt_version of host_agent is `host_agent.cn`, and the corresponding prompt file is `host_agent_cn.yaml` under the app/core/prompt directory.

To switch the LLM to another model, such as the Qwen model, change the `llm_model` name (for example, to `qwen_llm`, which is built into the aU system).

The number of discussion rounds (`round`) in the agent plan defaults to 2 and can be adjusted as needed. The default participants are two agents built into aU; you can add discussion group members by creating new agents and adding their names to the planner's `participant` list, as in the sketch that follows the configuration example below.

```yaml
info:
  name: 'host_agent'
  description: 'host agent'
profile:
  prompt_version: host_agent.cn
  llm_model:
    name: 'qwen_llm'
    temperature: 0.6
plan:
  planner:
    name: 'discussion_planner'
    round: 2
    participant:
      name:
        - 'participant_agent_one'
        - 'participant_agent_two'
memory:
  name: 'demo_memory'
metadata:
  type: 'AGENT'
  module: 'sample_standard_app.app.core.agent.discussion_agent_case.host_agent'
  class: 'HostAgent'
```
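For example, to enlarge the discussion group you could register a new agent and list it under `participant`; the name `participant_agent_three` below is hypothetical and stands for an agent you have created yourself (its own YAML file plus agent class).

```yaml
plan:
  planner:
    name: 'discussion_planner'
    round: 3                            # increase the number of rounds if desired
    participant:
      name:
        - 'participant_agent_one'
        - 'participant_agent_two'
        - 'participant_agent_three'     # hypothetical new agent registered by you
```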

### Memory Configuration
By default, the participants in the aU discussion group are configured with the `demo_memory` component from the `sample_standard_app` sample project, which stores the shared memory of the whole discussion group.
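
Each agent references this memory by name in its YAML, so switching to a different memory component (assuming it is registered under another name) only requires changing that name:

```yaml
memory:
  name: 'demo_memory'   # replace with the name of another registered memory component if needed
```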

### Planner Configuration
The planner code is located in the app/core/planner directory of the `sample_standard_app` sample project; the `discussion_planner.py` file implements the discussion group's planning process. If you are interested, you can read it yourself.

### Prompt Configuration
Prompts are managed by version number. The default prompt files for the aU discussion group agents are located in the app/core/prompt directory, in Chinese and English versions (host_agent_xx.yaml / participant_agent_xx.yaml); users can switch between them quickly by changing prompt_version in the agent configuration.
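
A prompt version file carries its version in the metadata section. The abridged sketch below follows the structure of `host_agent_en.yaml`; the target and instruction bodies are shortened here.

```yaml
introduction: You are an AI Q&A assistant proficient in information analysis.
target: Give the user an accurate answer based on the participants' multi-round discussion results.
instruction: |
  The rules you need to follow are:
  1. Summarize the results of the participants' discussion rounds before answering.
  ...
metadata:
  type: 'PROMPT'
  version: host_agent.en
```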

### Tool Configuration
Two participants in the aU discussion group are configured with google_search_tool by default. Users can switch tools by changing the `tool` name in `participant_agent_xxx.yaml` under the app/core/agent/discussion_agent_case directory of the `sample_standard_app` sample project.
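
Other aU sample agents reference tools through an `action.tool` list in the agent YAML; assuming the participant agents follow the same convention, switching tools looks roughly like this (a sketch, not the verbatim file):

```yaml
action:
  tool:
    - 'google_search_tool'   # replace with the name of another registered tool component
```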
30 changes: 15 additions & 15 deletions docs/guidebook/zh/6_2_1_讨论组.md
@@ -3,7 +3,7 @@

共两个智能体角色:讨论组主持人(一名)和讨论组参与者(若干)。

用户发起一个话题,主持人组织各位参与者开始讨论,每轮每个参与者根据讨论话题发表自己的观点(每个参与者的观点并非一成不见,会随着讨论的深入不断调整),几轮讨论后,主持人会对讨论过程进行总结并将参与者们经过几轮讨论后的结果返回给用户。
用户发起一个话题,主持人组织各位参与者开始讨论,每轮每个参与者根据讨论话题发表自己的观点(每个参与者的观点并非一成不变,会随着讨论的深入不断调整),几轮讨论后,主持人会对讨论过程进行总结并将参与者们经过几轮讨论后的结果返回给用户。

三个臭皮匠顶个诸葛亮,想象一下你有个想要解答的问题,直接交给agentUniverse讨论组,主持人将具有不同想法的多个智能体组织在一起,对你的问题进行集中讨论,最终给你一个全面且聚集智慧的答案,这是一个有趣的智能体实验。

@@ -19,9 +19,9 @@ OPENAI_API_KEY='xxx'
```

### 运行讨论组
在agentUnvierse的sample_standard_app示例工程中,找到app/examples目录下的discussion_chat_bots.py文件,chat方法中输入想要解答的问题,运行即可。
在agentUnvierse的`sample_standard_app`示例工程中,找到app/examples目录下的`discussion_chat_bots.py`文件,chat方法中输入想要解答的问题,运行即可。

例如,输入问题甜粽子好吃还是咸粽子好吃
例如,输入问题甜粽子好吃还是咸粽子好吃
```python
from agentuniverse.base.agentuniverse import AgentUniverse
from agentuniverse.agent.agent import Agent
@@ -41,15 +41,15 @@ if __name__ == '__main__':

## 更多细节
### 智能体配置
讨论组多智能体配置文件在sample_standard_app示例工程中的app/core/agent/discussion_agent_case目录下。
讨论组多智能体配置文件在`sample_standard_app`示例工程中的app/core/agent/discussion_agent_case目录下。

主持人对应host_agent.yaml,参与者对应participant_agent_*.yaml。
主持人对应`host_agent.yaml`,参与者对应`participant_agent_*.yaml`

prompt通过版本号管理,例如host_agent的默认prompt_version为host_agent.cn,对应prompt文件为app/core/prompt目录下的host_agent.cn.yaml
prompt通过版本号管理,例如host_agent的默认prompt_version为`host_agent.cn`,对应prompt文件为app/core/prompt目录下的`host_agent.cn.yaml`

llm_model若修改为其他模型(如qwen模型),请更改llm_model name信息(如aU系统内置的qwen_llm)。
llm_model若修改为其他模型(如qwen模型),请更改`llm_model` name信息(如aU系统内置的`qwen_llm`)。

agent plan中的round讨论轮数默认为2,用户可以按需调整;默认参与者为aU内置的两个智能体,可以通过建立新的智能体,并配置agent name在participant name中增加讨论组成员
agent plan中的round讨论轮数默认为2,用户可以按需调整;默认参与者为aU内置的两个智能体,可以通过建立新的智能体,并配置agent name在`participant`中增加讨论组成员
```yaml
info:
name: 'host_agent'
@@ -75,15 +75,15 @@ metadata:
class: 'HostAgent'
```
### memory配置
aU讨论组的参与者默认配置了sample_standard_app示例工程中的demo_memory,用于存储整个讨论组的公共记忆信息。
### Memory配置
aU讨论组的参与者默认配置了`sample_standard_app`示例工程中的`demo_memory`,用于存储整个讨论组的公共记忆信息。

### planner配置
planner配置文件在sample_standard_app示例工程中的app/core/planner目录下,其中discussion_planner.py文件为讨论组的具体plan流程代码,感兴趣可自行阅读。
### Planner配置
planner配置文件在`sample_standard_app`示例工程中的app/core/planner目录下,其中`discussion_planner.py`文件为讨论组的具体plan流程代码,感兴趣可自行阅读。

### prompt配置
### Prompt配置
prompt通过版本号管理,aU讨论组智能体默认prompt文件配置在app/core/prompt目录下,对应中英文两种版本(host_agent_xx.yaml/participant_agent_xx.yaml),用户可以在智能体配置中更改prompt_version完成快速切换。


### tool配置
aU讨论组的两位参与者默认配置了google_search_tool,用户可以更改sample_standard_app示例工程中的app/core/agent/discussion_agent_case目录下participant_agent_xxx.yaml中的tool name,实现工具调用的切换。
### Tool配置
aU讨论组的两位参与者默认配置了google_search_tool,用户可以更改`sample_standard_app`示例工程中的app/core/agent/discussion_agent_case目录下`participant_agent_xxx.yaml`中的`tool name`,实现工具调用的切换。
@@ -29,6 +29,8 @@ def parse_input(self, input_object: InputObject, agent_input: dict) -> dict:
dict: agent input parsed from `input_object` by the user.
"""
agent_input['input'] = input_object.get_data('input')
agent_input['participants'] = input_object.get_data('participants')
agent_input['total_round'] = input_object.get_data('total_round')
return agent_input

def parse_result(self, planner_result: dict) -> dict:
@@ -21,6 +21,7 @@ def parse_input(self, input_object: InputObject, agent_input: dict) -> dict:
agent_input['agent_name'] = input_object.get_data('agent_name')
agent_input['total_round'] = input_object.get_data('total_round')
agent_input['cur_round'] = input_object.get_data('cur_round')
agent_input['participants'] = input_object.get_data('participants')
return agent_input

def parse_result(self, planner_result: dict) -> dict:
@@ -5,7 +5,6 @@ profile:
prompt_version: demo_executing_agent.cn
llm_model:
name: 'demo_llm'
model_name: 'gpt-4o'
temperature: 0.4
plan:
planner:
@@ -5,7 +5,6 @@ profile:
prompt_version: demo_expressing_agent.cn
llm_model:
name: demo_llm
model_name: gpt-4o
temperature: 0.2
prompt_processor:
type: stuff
@@ -5,7 +5,6 @@ profile:
prompt_version: demo_planning_agent.cn
llm_model:
name: 'demo_llm'
model_name: 'gpt-4o'
temperature: 0.5
plan:
planner:
@@ -4,7 +4,6 @@ info:
profile:
llm_model:
name: 'demo_llm'
model_name: 'gpt-3.5-turbo-0125'
temperature: 0.5
plan:
planner:
1 change: 0 additions & 1 deletion sample_standard_app/app/core/memory/demo_memory.py
@@ -23,4 +23,3 @@ def __init__(self, **kwargs):
llm (LLM): the LLM instance used by this memory.
"""
super().__init__(**kwargs)
self.llm = QWenOpenAIStyleLLM(model_name="qwen-max")
10 changes: 5 additions & 5 deletions sample_standard_app/app/core/planner/discussion_planner.py
@@ -76,6 +76,8 @@ def agents_run(self, participant_agents: dict, planner_config: dict, agent_model
chat_history = []
LOGGER.info(f"The topic of discussion is {agent_input.get(self.input_key)}")
LOGGER.info(f"The participant agents are {'|'.join(participant_agents.keys())}")
agent_input['total_round'] = total_round
agent_input['participants'] = ' and '.join(participant_agents.keys())
for i in range(total_round):
LOGGER.info("------------------------------------------------------------------")
LOGGER.info(f"Start a discussion, round is {i + 1}.")
@@ -85,21 +87,19 @@
LOGGER.info("------------------------------------------------------------------")
# invoke participant agent
agent_input['agent_name'] = agent_name
agent_input['total_round'] = total_round
agent_input['cur_round'] = i + 1
output_object: OutputObject = agent.run(**agent_input)
current_output = output_object.get_data('output', '')

# process chat history
chat_history.append({'content': agent_input.get('input'), 'type': 'human'})
chat_history.append(
{'content': f'the round {i + 1} agent {agent_name} thought: {current_output}.', 'type': 'ai'})
{'content': f'the round {i + 1} agent {agent_name} thought: {current_output}', 'type': 'ai'})
agent_input['chat_history'] = chat_history

LOGGER.info(f"the round {i + 1} agent {agent_name} thought: {output_object.get_data('output', '')}.")
LOGGER.info(f"the round {i + 1} agent {agent_name} thought: {output_object.get_data('output', '')}")

agent_input['chat_history'] = chat_history

# finally invoke host agent
return self.invoke_host_agent(agent_model, agent_input)

@@ -133,7 +133,7 @@ def invoke_host_agent(self, agent_model: AgentModel, planner_input: dict) -> dic
) | StrOutputParser()
res = asyncio.run(
chain_with_history.ainvoke(input=planner_input, config={"configurable": {"session_id": "unused"}}))
LOGGER.info(f"Discussion summary is: {res}.")
LOGGER.info(f"Discussion summary is: {res}")
return {**planner_input, self.output_key: res, 'chat_history': generate_memories(chat_history)}

def handle_prompt(self, agent_model: AgentModel, planner_input: dict) -> ChatPrompt:
4 changes: 3 additions & 1 deletion sample_standard_app/app/core/prompt/host_agent_cn.yaml
@@ -12,8 +12,10 @@ instruction: |
{chat_history}
开始!
一共{total_round}轮讨论,讨论组的参与者包括: {participants}.
需要回答的问题是: {input}
请用中文回答,需要回答的问题是: {input}
metadata:
type: 'PROMPT'
version: host_agent.cn
8 changes: 5 additions & 3 deletions sample_standard_app/app/core/prompt/host_agent_en.yaml
@@ -2,8 +2,8 @@ introduction: You are an AI Q&A assistant proficient in information analysis.
target: Human gives you a question, and your goal is to give human an accurate answer based on the results of multiple rounds of discussion among the participants in the discussion group.
instruction: |
The rules you need to follow are:
1. Before answering the user's questions, you need to summarize in detail the results of several rounds of discussions among the participants in the discussion group.
2. Evaluate the multi-round discussion contributions of each participant in the discussion group.
1. Before answering the user's questions, you need to briefly summarize in detail the results of several rounds of discussions among the participants in the discussion group.
2. Briefly summarize the multi-round discussion process of each participant in the discussion group.
3. Based on the multi-round discussion results from the discussion group, answer the questions posed by users as thoroughly and substantiated as possible.
Today's date is: {date}
Expand All @@ -12,8 +12,10 @@ instruction: |
{chat_history}
Begin!
There are totally {total_round} rounds of discussion, participants in the discussion group include: {participants}.
The question to be answered is: {input}
Please answer in English, the question to be answered is: {input}
metadata:
type: 'PROMPT'
version: host_agent.en
11 changes: 6 additions & 5 deletions sample_standard_app/app/core/prompt/participant_agent_cn.yaml
@@ -8,10 +8,6 @@ instruction: |
4. 每一轮你都需要发言一次,尽量坚持自己多轮一直持有的观点,如果发现其他讨论参与者说的更有道理,也可以改变自己最开始的观点。
5. 最后一轮的时候,根据讨论组参与者多轮讨论的对话结果,以及自己多轮发表的观点,回答用户提出的问题,要求尽量详细,有理有据。
你是agent:{agent_name}
一共{total_round}轮讨论,当前是第{cur_round}轮讨论。
背景信息是:
{background}
Expand All @@ -21,8 +17,13 @@ instruction: |
{chat_history}
开始!
你是讨论组参与者,你的名字是:{agent_name},讨论参与者包括:{participants}
一共{total_round}轮讨论,当前是第{cur_round}轮讨论。
需要回答的问题是: {input}
请用中文回答,需要回答的问题是: {input}
metadata:
type: 'PROMPT'
version: participant_agent.cn
10 changes: 5 additions & 5 deletions sample_standard_app/app/core/prompt/participant_agent_en.yaml
@@ -10,10 +10,6 @@ instruction: |
4. You need to speak once in each round, try to stick to the views you have held for many rounds, and if you find that what other discussion participants have said is more reasonable, you can also change your original point of view.
5. In the last round, according to the dialogue results of many rounds of discussion among the participants in the discussion group, as well as your own views expressed in many rounds, answer the questions raised by the users and ask them to be as detailed and justified as possible.。
your are the agent: {agent_name}
There are altogether {total_round} rounds of discussion, and this is currently the {cur_round} round of discussions.
Background is:
{background}
Expand All @@ -24,7 +20,11 @@ instruction: |
Begin!
The question to be answered is: {input}
You are the participant of this discussion group, your name is : {agent_name}, participants in the discussion group include: {participants}.
There are totally {total_round} rounds of discussion, and this is currently the {cur_round} round of discussions.
Please answer in English, the question to be answered is: {input}
metadata:
type: 'PROMPT'
version: participant_agent.en
