diff --git a/agentuniverse/agent/plan/planner/planner.py b/agentuniverse/agent/plan/planner/planner.py index a9401807..7c9def6a 100644 --- a/agentuniverse/agent/plan/planner/planner.py +++ b/agentuniverse/agent/plan/planner/planner.py @@ -74,10 +74,10 @@ def handle_memory(self, agent_model: AgentModel, planner_input: dict) -> ChatMem chat_history: list = planner_input.get('chat_history') memory_name = agent_model.memory.get('name') llm_model = agent_model.memory.get('llm_model') or dict() - llm_name = llm_model.get('name') + llm_name = llm_model.get('name') or agent_model.profile.get('llm_model').get('name') messages: list[Message] = generate_messages(chat_history) - llm: LLM = LLMManager().get_instance_obj(llm_name) if llm_name else None + llm: LLM = LLMManager().get_instance_obj(llm_name) params: dict = dict() params['messages'] = messages params['llm'] = llm diff --git a/docs/guidebook/en/6_2_1_Discussion_Group.md b/docs/guidebook/en/6_2_1_Discussion_Group.md new file mode 100644 index 00000000..e0d19a7d --- /dev/null +++ b/docs/guidebook/en/6_2_1_Discussion_Group.md @@ -0,0 +1,118 @@ +# Discussion: Multi-Agent Discussion Group +In this use case, we will demonstrate how to use agentUniverse for multi-agent discussions. + +There are two agent roles: a discussion group host (one) and discussion group participants (several). + +The user initiates a discussion topic, and the host organizes the participants to start the discussion. In each round, each participant expresses their own views on the topic (these views are not immutable; they are continually adjusted as the discussion deepens). After several rounds, the host summarizes the discussion process and returns the participants' conclusions to the user.
+ +Many heads are better than one. Imagine that you have a question you want answered: submit it directly to the agentUniverse discussion group, and the host will bring multiple agents with different ideas together, focus them on your question, and finally give you a comprehensive and intelligent answer. It is an interesting agent experiment. + +## Quick Start +### Configure API Key +For example, configure key information in the file `custom_key.toml`, in which agentUniverse manages private key configuration (the agentUniverse discussion group uses GPT as the base model and Serper as the Google search tool by default; how to use other models or tools is described below). +```toml +[KEY_LIST] +# serper google search key +SERPER_API_KEY='xxx' +# openai api key +OPENAI_API_KEY='xxx' +``` +### Modify Prompt Version +Change the agents' prompt version to the English version. The discussion group multi-agent configuration files are located in the app/core/agent/discussion_agent_case directory of the `sample_standard_app` sample project. Find `host_agent.yaml` and change `prompt_version` to `host_agent.en`; similarly, find the participant YAML files and change their `prompt_version` to `participant_agent.en`. + +```yaml +info: + name: 'host_agent' + description: 'host agent' +profile: + prompt_version: host_agent.en + llm_model: + name: 'demo_llm' + temperature: 0.6 +plan: + planner: + name: 'discussion_planner' + round: 2 + participant: + name: + - 'participant_agent_one' + - 'participant_agent_two' +memory: + name: 'demo_memory' +metadata: + type: 'AGENT' + module: 'sample_standard_app.app.core.agent.discussion_agent_case.host_agent' + class: 'HostAgent' +``` + + +### Run Discussion Group +In the agentUniverse sample project `sample_standard_app`, find the `discussion_chat_bots.py` file in the app/examples directory, enter the question you want answered in the chat method, and run it.
+ +For example, enter the question: Which tastes better, Coca-Cola or Pepsi +```python +from agentuniverse.base.agentuniverse import AgentUniverse +from agentuniverse.agent.agent import Agent +from agentuniverse.agent.agent_manager import AgentManager + +AgentUniverse().start(config_path='../../config/config.toml') + + +def chat(question: str): + instance: Agent = AgentManager().get_instance_obj('host_agent') + instance.run(input=question) + + +if __name__ == '__main__': + chat("Which tastes better, Coca-Cola or Pepsi") +``` + +## More Details +### Agent Configuration +The discussion group multi-agent configuration files are located in the app/core/agent/discussion_agent_case directory of the `sample_standard_app` sample project. + +The host corresponds to `host_agent.yaml` and the participants correspond to `participant_agent_*.yaml`. + + +Prompts are managed by prompt version. For example, the default prompt_version of host_agent is `host_agent.cn`, and the corresponding prompt file is `host_agent.cn.yaml` under the app/core/prompt directory. + +To switch the LLM to another model, such as a Qwen model, change the `llm_model` name information (for example, to `qwen_llm`, which is built into the aU system). + +The number of discussion rounds in the agent plan defaults to 2 and can be adjusted as needed. The default participants are two agents built into aU; you can add discussion group members by creating new agents and adding their names to the planner's `participant` configuration.
+ +```yaml +info: + name: 'host_agent' + description: 'host agent' +profile: + prompt_version: host_agent.cn + llm_model: + name: 'qwen_llm' + temperature: 0.6 +plan: + planner: + name: 'discussion_planner' + round: 2 + participant: + name: + - 'participant_agent_one' + - 'participant_agent_two' +memory: + name: 'demo_memory' +metadata: + type: 'AGENT' + module: 'sample_standard_app.app.core.agent.discussion_agent_case.host_agent' + class: 'HostAgent' +``` + +### Memory Configuration +Participants in the aU discussion group are configured by default with the `demo_memory` from the `sample_standard_app` sample project, which stores the shared memory of the entire discussion group. + +### Planner Configuration +The planner configuration file is located in the app/core/planner directory of the `sample_standard_app` sample project, where the `discussion_planner.py` file contains the concrete planning logic of the discussion group; interested readers can study it on their own. + +### Prompt Configuration +Prompts are managed by prompt version, and the default prompt files of the aU discussion group agents are located in the app/core/prompt directory in both Chinese and English versions (host_agent_xx.yaml/participant_agent_xx.yaml); users can switch quickly by changing the prompt_version in the agent configuration. + +### Tool Configuration +The two participants in the aU discussion group are configured with google_search_tool by default. Users can change the tool `name` in `participant_agent_xxx.yaml` under the app/core/agent/discussion_agent_case directory of the `sample_standard_app` sample project to switch tools.
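To make the member-adding step concrete, a hypothetical third participant could be sketched as below. The file name `participant_agent_three.yaml` and every value shown are illustrative assumptions modeled on the built-in participant agents, not files that ship with the project; the new name would also need to be appended to the `plan.planner.participant.name` list in `host_agent.yaml`.

```yaml
# Hypothetical participant_agent_three.yaml; all values are illustrative,
# modeled on the built-in participant agents.
info:
  name: 'participant_agent_three'
  description: 'a third participant agent'
profile:
  prompt_version: participant_agent.en
  llm_model:
    name: 'demo_llm'
    temperature: 0.6
memory:
  name: 'demo_memory'
metadata:
  type: 'AGENT'
  module: 'sample_standard_app.app.core.agent.discussion_agent_case.participant_agent'
  class: 'ParticipantAgent'
```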
diff --git "a/docs/guidebook/zh/6_2_1_\350\256\250\350\256\272\347\273\204.md" "b/docs/guidebook/zh/6_2_1_\350\256\250\350\256\272\347\273\204.md" index 9012d969..a3289267 100644 --- "a/docs/guidebook/zh/6_2_1_\350\256\250\350\256\272\347\273\204.md" +++ "b/docs/guidebook/zh/6_2_1_\350\256\250\350\256\272\347\273\204.md" @@ -3,7 +3,7 @@ 共两个智能体角色:讨论组主持人(一名)和讨论组参与者(若干)。 -用户发起一个话题,主持人组织各位参与者开始讨论,每轮每个参与者根据讨论话题发表自己的观点(每个参与者的观点并非一成不见,会随着讨论的深入不断调整),几轮讨论后,主持人会对讨论过程进行总结并将参与者们经过几轮讨论后的结果返回给用户。 +用户发起一个话题,主持人组织各位参与者开始讨论,每轮每个参与者根据讨论话题发表自己的观点(每个参与者的观点并非一成不变,会随着讨论的深入不断调整),几轮讨论后,主持人会对讨论过程进行总结并将参与者们经过几轮讨论后的结果返回给用户。 三个臭皮匠顶个诸葛亮,想象一下你有个想要解答的问题,直接交给agentUniverse讨论组,主持人将具有不同想法的多个智能体组织在一起,对你的问题进行集中讨论,最终给你一个全面且聚集智慧的答案,这是一个有趣的智能体实验。 @@ -19,9 +19,9 @@ OPENAI_API_KEY='xxx' ``` ### 运行讨论组 -在agentUnvierse的sample_standard_app示例工程中,找到app/examples目录下的discussion_chat_bots.py文件,chat方法中输入想要解答的问题,运行即可。 +在agentUniverse的`sample_standard_app`示例工程中,找到app/examples目录下的`discussion_chat_bots.py`文件,chat方法中输入想要解答的问题,运行即可。 -例如,输入问题甜粽子好吃还是咸粽子好吃: +例如,输入问题甜粽子好吃还是咸粽子好吃 ```python from agentuniverse.base.agentuniverse import AgentUniverse from agentuniverse.agent.agent import Agent @@ -41,15 +41,15 @@ if __name__ == '__main__': ## 更多细节 ### 智能体配置 -讨论组多智能体配置文件在sample_standard_app示例工程中的app/core/agent/discussion_agent_case目录下。 +讨论组多智能体配置文件在`sample_standard_app`示例工程中的app/core/agent/discussion_agent_case目录下。 -主持人对应host_agent.yaml,参与者对应participant_agent_*.yaml。 +主持人对应`host_agent.yaml`,参与者对应`participant_agent_*.yaml`。 -prompt通过版本号管理,例如host_agent的默认prompt_version为host_agent.cn,对应prompt文件为app/core/prompt目录下的host_agent.cn.yaml +prompt通过版本号管理,例如host_agent的默认prompt_version为`host_agent.cn`,对应prompt文件为app/core/prompt目录下的`host_agent.cn.yaml` -llm_model若修改为其他模型(如qwen模型),请更改llm_model name信息(如aU系统内置的qwen_llm)。 +llm_model若修改为其他模型(如qwen模型),请更改`llm_model` name信息(如aU系统内置的`qwen_llm`)。 -agent plan中的round讨论轮数默认为2,用户可以按需调整;默认参与者为aU内置的两个智能体,可以通过建立新的智能体,并配置agent name在participant name中增加讨论组成员。 +agent 
plan中的round讨论轮数默认为2,用户可以按需调整;默认参与者为aU内置的两个智能体,可以通过建立新的智能体,并配置agent name在`participant`中增加讨论组成员。 ```yaml info: name: 'host_agent' @@ -75,15 +75,15 @@ metadata: class: 'HostAgent' ``` -### memory配置 -aU讨论组的参与者默认配置了sample_standard_app示例工程中的demo_memory,用于存储整个讨论组的公共记忆信息。 +### Memory配置 +aU讨论组的参与者默认配置了`sample_standard_app`示例工程中的`demo_memory`,用于存储整个讨论组的公共记忆信息。 -### planner配置 -planner配置文件在sample_standard_app示例工程中的app/core/planner目录下,其中discussion_planner.py文件为讨论组的具体plan流程代码,感兴趣可自行阅读。 +### Planner配置 +planner配置文件在`sample_standard_app`示例工程中的app/core/planner目录下,其中`discussion_planner.py`文件为讨论组的具体plan流程代码,感兴趣可自行阅读。 -### prompt配置 +### Prompt配置 prompt通过版本号管理,aU讨论组智能体默认prompt文件配置在app/core/prompt目录下,对应中英文两种版本(host_agent_xx.yaml/participant_agent_xx.yaml),用户可以在智能体配置中更改prompt_version完成快速切换。 -### tool配置 -aU讨论组的两位参与者默认配置了google_search_tool,用户可以更改sample_standard_app示例工程中的app/core/agent/discussion_agent_case目录下participant_agent_xxx.yaml中的tool name,实现工具调用的切换。 +### Tool配置 +aU讨论组的两位参与者默认配置了google_search_tool,用户可以更改`sample_standard_app`示例工程中的app/core/agent/discussion_agent_case目录下`participant_agent_xxx.yaml`中的`tool name`,实现工具调用的切换。 diff --git a/sample_standard_app/app/core/agent/discussion_agent_case/host_agent.py b/sample_standard_app/app/core/agent/discussion_agent_case/host_agent.py index 13790cb4..5abf97cd 100644 --- a/sample_standard_app/app/core/agent/discussion_agent_case/host_agent.py +++ b/sample_standard_app/app/core/agent/discussion_agent_case/host_agent.py @@ -29,6 +29,8 @@ def parse_input(self, input_object: InputObject, agent_input: dict) -> dict: dict: agent input parsed from `input_object` by the user. 
""" agent_input['input'] = input_object.get_data('input') + agent_input['participants'] = input_object.get_data('participants') + agent_input['total_round'] = input_object.get_data('total_round') return agent_input def parse_result(self, planner_result: dict) -> dict: diff --git a/sample_standard_app/app/core/agent/discussion_agent_case/participant_agent.py b/sample_standard_app/app/core/agent/discussion_agent_case/participant_agent.py index 4458629f..c1f817cc 100644 --- a/sample_standard_app/app/core/agent/discussion_agent_case/participant_agent.py +++ b/sample_standard_app/app/core/agent/discussion_agent_case/participant_agent.py @@ -21,6 +21,7 @@ def parse_input(self, input_object: InputObject, agent_input: dict) -> dict: agent_input['agent_name'] = input_object.get_data('agent_name') agent_input['total_round'] = input_object.get_data('total_round') agent_input['cur_round'] = input_object.get_data('cur_round') + agent_input['participants'] = input_object.get_data('participants') return agent_input def parse_result(self, planner_result: dict) -> dict: diff --git a/sample_standard_app/app/core/agent/peer_agent_case/demo_executing_agent.yaml b/sample_standard_app/app/core/agent/peer_agent_case/demo_executing_agent.yaml index fa2e29f7..91daba1d 100644 --- a/sample_standard_app/app/core/agent/peer_agent_case/demo_executing_agent.yaml +++ b/sample_standard_app/app/core/agent/peer_agent_case/demo_executing_agent.yaml @@ -5,7 +5,6 @@ profile: prompt_version: demo_executing_agent.cn llm_model: name: 'demo_llm' - model_name: 'gpt-4o' temperature: 0.4 plan: planner: diff --git a/sample_standard_app/app/core/agent/peer_agent_case/demo_expressing_agent.yaml b/sample_standard_app/app/core/agent/peer_agent_case/demo_expressing_agent.yaml index b5189daf..64ca635d 100644 --- a/sample_standard_app/app/core/agent/peer_agent_case/demo_expressing_agent.yaml +++ b/sample_standard_app/app/core/agent/peer_agent_case/demo_expressing_agent.yaml @@ -5,7 +5,6 @@ profile: prompt_version: 
demo_expressing_agent.cn llm_model: name: demo_llm - model_name: gpt-4o temperature: 0.2 prompt_processor: type: stuff diff --git a/sample_standard_app/app/core/agent/peer_agent_case/demo_planning_agent.yaml b/sample_standard_app/app/core/agent/peer_agent_case/demo_planning_agent.yaml index 278b6fff..b50071da 100644 --- a/sample_standard_app/app/core/agent/peer_agent_case/demo_planning_agent.yaml +++ b/sample_standard_app/app/core/agent/peer_agent_case/demo_planning_agent.yaml @@ -5,7 +5,6 @@ profile: prompt_version: demo_planning_agent.cn llm_model: name: 'demo_llm' - model_name: 'gpt-4o' temperature: 0.5 plan: planner: diff --git a/sample_standard_app/app/core/agent/peer_agent_case/demo_reviewing_agent.yaml b/sample_standard_app/app/core/agent/peer_agent_case/demo_reviewing_agent.yaml index 38f45076..b1280a14 100644 --- a/sample_standard_app/app/core/agent/peer_agent_case/demo_reviewing_agent.yaml +++ b/sample_standard_app/app/core/agent/peer_agent_case/demo_reviewing_agent.yaml @@ -4,7 +4,6 @@ info: profile: llm_model: name: 'demo_llm' - model_name: 'gpt-3.5-turbo-0125' temperature: 0.5 plan: planner: diff --git a/sample_standard_app/app/core/memory/demo_memory.py b/sample_standard_app/app/core/memory/demo_memory.py index 00b0fb66..1a893132 100644 --- a/sample_standard_app/app/core/memory/demo_memory.py +++ b/sample_standard_app/app/core/memory/demo_memory.py @@ -23,4 +23,3 @@ def __init__(self, **kwargs): llm (LLM): the LLM instance used by this memory. 
""" super().__init__(**kwargs) - self.llm = QWenOpenAIStyleLLM(model_name="qwen-max") diff --git a/sample_standard_app/app/core/planner/discussion_planner.py b/sample_standard_app/app/core/planner/discussion_planner.py index bc9e3720..e4518923 100644 --- a/sample_standard_app/app/core/planner/discussion_planner.py +++ b/sample_standard_app/app/core/planner/discussion_planner.py @@ -76,6 +76,8 @@ def agents_run(self, participant_agents: dict, planner_config: dict, agent_model chat_history = [] LOGGER.info(f"The topic of discussion is {agent_input.get(self.input_key)}") LOGGER.info(f"The participant agents are {'|'.join(participant_agents.keys())}") + agent_input['total_round'] = total_round + agent_input['participants'] = ' and '.join(participant_agents.keys()) for i in range(total_round): LOGGER.info("------------------------------------------------------------------") LOGGER.info(f"Start a discussion, round is {i + 1}.") @@ -85,7 +87,6 @@ def agents_run(self, participant_agents: dict, planner_config: dict, agent_model LOGGER.info("------------------------------------------------------------------") # invoke participant agent agent_input['agent_name'] = agent_name - agent_input['total_round'] = total_round agent_input['cur_round'] = i + 1 output_object: OutputObject = agent.run(**agent_input) current_output = output_object.get_data('output', '') @@ -93,13 +94,12 @@ def agents_run(self, participant_agents: dict, planner_config: dict, agent_model # process chat history chat_history.append({'content': agent_input.get('input'), 'type': 'human'}) chat_history.append( - {'content': f'the round {i + 1} agent {agent_name} thought: {current_output}.', 'type': 'ai'}) + {'content': f'the round {i + 1} agent {agent_name} thought: {current_output}', 'type': 'ai'}) agent_input['chat_history'] = chat_history - LOGGER.info(f"the round {i + 1} agent {agent_name} thought: {output_object.get_data('output', '')}.") + LOGGER.info(f"the round {i + 1} agent {agent_name} thought: 
{output_object.get_data('output', '')}") agent_input['chat_history'] = chat_history - # finally invoke host agent return self.invoke_host_agent(agent_model, agent_input) @@ -133,7 +133,7 @@ def invoke_host_agent(self, agent_model: AgentModel, planner_input: dict) -> dic ) | StrOutputParser() res = asyncio.run( chain_with_history.ainvoke(input=planner_input, config={"configurable": {"session_id": "unused"}})) - LOGGER.info(f"Discussion summary is: {res}.") + LOGGER.info(f"Discussion summary is: {res}") return {**planner_input, self.output_key: res, 'chat_history': generate_memories(chat_history)} def handle_prompt(self, agent_model: AgentModel, planner_input: dict) -> ChatPrompt: diff --git a/sample_standard_app/app/core/prompt/host_agent_cn.yaml b/sample_standard_app/app/core/prompt/host_agent_cn.yaml index c4a9145a..b59d7573 100644 --- a/sample_standard_app/app/core/prompt/host_agent_cn.yaml +++ b/sample_standard_app/app/core/prompt/host_agent_cn.yaml @@ -12,8 +12,10 @@ instruction: | {chat_history} 开始! + + 一共{total_round}轮讨论,讨论组的参与者包括: {participants}. - 需要回答的问题是: {input} + 请用中文回答,需要回答的问题是: {input} metadata: type: 'PROMPT' version: host_agent.cn diff --git a/sample_standard_app/app/core/prompt/host_agent_en.yaml b/sample_standard_app/app/core/prompt/host_agent_en.yaml index fcd46353..9c0ec947 100644 --- a/sample_standard_app/app/core/prompt/host_agent_en.yaml +++ b/sample_standard_app/app/core/prompt/host_agent_en.yaml @@ -2,8 +2,8 @@ introduction: You are an AI Q&A assistant proficient in information analysis. target: Human gives you a question, and your goal is to give human an accurate answer based on the results of multiple rounds of discussion among the participants in the discussion group. instruction: | The rules you need to follow are: - 1. Before answering the user's questions, you need to summarize in detail the results of several rounds of discussions among the participants in the discussion group. - 2. 
Evaluate the multi-round discussion contributions of each participant in the discussion group. + 1. Before answering the user's questions, you need to briefly summarize the results of several rounds of discussions among the participants in the discussion group. + 2. Briefly summarize the multi-round discussion process of each participant in the discussion group. 3. Based on the multi-round discussion results from the discussion group, answer the questions posed by users as thoroughly and substantiated as possible. Today's date is: {date} @@ -12,8 +12,10 @@ instruction: | {chat_history} Begin! + + There are {total_round} rounds of discussion in total, and participants in the discussion group include: {participants}. - The question to be answered is: {input} + Please answer in English. The question to be answered is: {input} metadata: type: 'PROMPT' version: host_agent.en diff --git a/sample_standard_app/app/core/prompt/participant_agent_cn.yaml b/sample_standard_app/app/core/prompt/participant_agent_cn.yaml index e3f573d5..63cb3a25 100644 --- a/sample_standard_app/app/core/prompt/participant_agent_cn.yaml +++ b/sample_standard_app/app/core/prompt/participant_agent_cn.yaml @@ -8,10 +8,6 @@ instruction: | 4. 每一轮你都需要发言一次,尽量坚持自己多轮一直持有的观点,如果发现其他讨论参与者说的更有道理,也可以改变自己最开始的观点。 5. 最后一轮的时候,根据讨论组参与者多轮讨论的对话结果,以及自己多轮发表的观点,回答用户提出的问题,要求尽量详细,有理有据。 - 你是agent:{agent_name} - - 一共{total_round}轮讨论,当前是第{cur_round}轮讨论。 - 背景信息是: {background} @@ -21,8 +17,13 @@ instruction: | {chat_history} 开始! 
+ + 你是讨论组参与者,你的名字是:{agent_name},讨论参与者包括:{participants} + + 一共{total_round}轮讨论,当前是第{cur_round}轮讨论。 + - 需要回答的问题是: {input} + 请用中文回答,需要回答的问题是: {input} metadata: type: 'PROMPT' version: participant_agent.cn diff --git a/sample_standard_app/app/core/prompt/participant_agent_en.yaml b/sample_standard_app/app/core/prompt/participant_agent_en.yaml index 7aafaf15..b5fe2d88 100644 --- a/sample_standard_app/app/core/prompt/participant_agent_en.yaml +++ b/sample_standard_app/app/core/prompt/participant_agent_en.yaml @@ -10,10 +10,6 @@ instruction: | 4. You need to speak once in each round, try to stick to the views you have held for many rounds, and if you find that what other discussion participants have said is more reasonable, you can also change your original point of view. 5. In the last round, according to the dialogue results of many rounds of discussion among the participants in the discussion group, as well as your own views expressed in many rounds, answer the questions raised by the users and ask them to be as detailed and justified as possible. - your are the agent: {agent_name} - - There are altogether {total_round} rounds of discussion, and this is currently the {cur_round} round of discussions. - Background is: {background} @@ -24,7 +20,11 @@ instruction: | Begin! - The question to be answered is: {input} + You are a participant in this discussion group; your name is: {agent_name}, and the participants in the discussion group include: {participants}. + + There are {total_round} rounds of discussion in total, and this is currently round {cur_round}. + + Please answer in English. The question to be answered is: {input} metadata: type: 'PROMPT' version: participant_agent.en
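The planner.py change at the top of this diff makes handle_memory fall back to the agent profile's model when the memory configuration does not name an LLM. Below is a minimal standalone sketch of that lookup; the function and variable names are illustrative, not the framework API, and unlike the patched line the sketch also guards against a profile with no `llm_model` entry.

```python
def resolve_memory_llm_name(memory_conf: dict, profile_conf: dict):
    """Return the LLM name for a memory module.

    Priority: the memory's own llm_model.name, then the agent profile's
    llm_model.name. Names and signature are illustrative only.
    """
    memory_llm = memory_conf.get('llm_model') or {}
    profile_llm = profile_conf.get('llm_model') or {}
    # Memory-level name wins; otherwise fall back to the profile's model.
    return memory_llm.get('name') or profile_llm.get('name')


# The memory config may omit llm_model entirely; the profile then decides.
print(resolve_memory_llm_name({'name': 'demo_memory'},
                              {'llm_model': {'name': 'demo_llm'}}))  # demo_llm
```

Note that the patched expression `agent_model.profile.get('llm_model').get('name')` raises AttributeError when the profile has no `llm_model` key (since `.get` then returns None), which is why the sketch adds the `or {}` guard.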