Merge pull request #88 from alipay/dev
feat: Release version 0.0.9
LandJerry authored Jun 14, 2024
2 parents 8429b84 + 39c40f0 commit 5a78a21
Showing 137 changed files with 3,169 additions and 60 deletions.
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -24,6 +24,21 @@ Note - Additional remarks regarding the version.
***************************************************

# Version Update History
## [0.0.9] - 2024-06-14
### Added
- Added standard integrations for the Claude and Ollama LLM components
- Added a new Qwen embedding module
- Added default ReAct-type and NL2API-type agents

### Note
- Added new use cases
- RAG-Type Agent Examples: Legal Consultation Agent
- ReAct-Type Agent Examples: Python Code Generation and Execution Agent
- Multi-Agent: Discussion Group Based on Multi-Turn Multi-Agent Mode

For more details, please refer to the use case section in the user documentation.
- Some code optimizations and documentation updates.

## [0.0.8] - 2024-06-06
### Added
- Introduced a new monitor module
15 changes: 15 additions & 0 deletions CHANGELOG_zh.md
@@ -24,6 +24,21 @@ Note - Additional remarks regarding the version.
***************************************************

# Version Update History
## [0.0.9] - 2024-06-14
### Added
- Added standard integrations for the Claude and Ollama LLM components
- Added a new Qwen embedding module
- Added default ReAct and NL2API agents

### Note
- Added new use cases
  - RAG-Type Agent Example: Legal Consultation Agent
  - ReAct-Type Agent Example: Python Code Generation and Execution Agent
  - Multi-Agent Example: Discussion Group Based on Multi-Turn Multi-Agent Mode

For more details, please refer to the use case section in the user documentation.
- Some code optimizations and documentation updates.

## [0.0.8] - 2024-06-06
### Added
- Added a monitor module
7 changes: 6 additions & 1 deletion README.md
@@ -5,7 +5,7 @@ Language version: [English](./README.md) | [中文](./README_zh.md) | [日本語
![](https://img.shields.io/badge/framework-agentUniverse-pink)
![](https://img.shields.io/badge/python-3.10%2B-blue?logo=Python)
[![](https://img.shields.io/badge/%20license-Apache--2.0-yellow)](LICENSE)
[![Static Badge](https://img.shields.io/badge/pypi-v0.0.8-blue?logo=pypi)](https://pypi.org/project/agentUniverse/)
[![Static Badge](https://img.shields.io/badge/pypi-v0.0.9-blue?logo=pypi)](https://pypi.org/project/agentUniverse/)

![](docs/guidebook/_picture/logo_bar.jpg)
****************************************
@@ -44,6 +44,11 @@ We will show you how to:
* Quickly serve the agent
For details, please read [Quick Start](docs/guidebook/en/1_3_Quick_Start.md).

## Use Cases
[Legal Consultation Agent](./docs/guidebook/en/7_1_1_Legal_Consultation_Case.md)
[Python Code Generation and Execution Agent](./docs/guidebook/en/7_1_1_Python_Auto_Runner.md)
[Discussion Group Based on Multi-Turn Multi-Agent Mode](./docs/guidebook/en/6_2_1_Discussion_Group.md)

## Guidebook
For more detailed information, please refer to the [Guidebook](docs/guidebook/en/0_index.md).

7 changes: 6 additions & 1 deletion README_jp.md
@@ -5,7 +5,7 @@
![](https://img.shields.io/badge/framework-agentUniverse-pink)
![](https://img.shields.io/badge/python-3.10%2B-blue?logo=Python)
[![](https://img.shields.io/badge/%20license-Apache--2.0-yellow)](LICENSE)
[![Static Badge](https://img.shields.io/badge/pypi-v0.0.8-blue?logo=pypi)](https://pypi.org/project/agentUniverse/)
[![Static Badge](https://img.shields.io/badge/pypi-v0.0.9-blue?logo=pypi)](https://pypi.org/project/agentUniverse/)

![](docs/guidebook/_picture/logo_bar.jpg)
****************************************
@@ -43,6 +43,11 @@ pip install agentUniverse
* Quickly serve the agent
For details, please read [Quick Start](docs/guidebook/en/1_3_Quick_Start.md).

## Use Cases
[Legal Consultation Agent](./docs/guidebook/en/7_1_1_Legal_Consultation_Case.md)
[Python Code Generation and Execution Agent](./docs/guidebook/en/7_1_1_Python_Auto_Runner.md)
[Discussion Group Based on Multi-Turn Multi-Agent Mode](./docs/guidebook/en/6_2_1_Discussion_Group.md)

## Guidebook
For more detailed information, please refer to the [Guidebook](docs/guidebook/en/0_index.md).

7 changes: 6 additions & 1 deletion README_zh.md
@@ -5,7 +5,7 @@
![](https://img.shields.io/badge/framework-agentUniverse-pink)
![](https://img.shields.io/badge/python-3.10%2B-blue?logo=Python)
[![](https://img.shields.io/badge/%20license-Apache--2.0-yellow)](LICENSE)
[![Static Badge](https://img.shields.io/badge/pypi-v0.0.8-blue?logo=pypi)](https://pypi.org/project/agentUniverse/)
[![Static Badge](https://img.shields.io/badge/pypi-v0.0.9-blue?logo=pypi)](https://pypi.org/project/agentUniverse/)

![](docs/guidebook/_picture/logo_bar.jpg)
****************************************
@@ -47,6 +47,11 @@ pip install agentUniverse

For details, please read [Quick Start](docs/guidebook/zh/1_3_%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B.md).

## Use Cases
[Legal Consultation Agent](./docs/guidebook/zh/7_1_1_法律咨询案例.md)
[Python Code Generation and Execution Agent](./docs/guidebook/zh/7_1_1_Python自动执行案例.md)
[Discussion Group Based on Multi-Turn Multi-Agent Mode](./docs/guidebook/zh/6_2_1_讨论组.md)

## User Guide
For more details, please refer to the [Guidebook](docs/guidebook/zh/0_%E7%9B%AE%E5%BD%95.md).

156 changes: 156 additions & 0 deletions agentuniverse/agent/action/knowledge/embedding/dashscope_embedding.py
@@ -0,0 +1,156 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-

# @Time : 2024/6/12 11:43
# @Author : wangchongshi
# @Email : wangchongshi.wcs@antgroup.com
# @FileName: dashscope_embedding.py
import aiohttp
import requests
from typing import List, Generator, Optional
import json

from agentuniverse.base.util.env_util import get_from_env
from agentuniverse.agent.action.knowledge.embedding.embedding import Embedding

# DashScope supports at most 25 strings per batch; each string can be at most 2,048 tokens.
DASHSCOPE_MAX_BATCH_SIZE = 25
DASHSCOPE_EMBEDDING_URL = "https://dashscope.aliyuncs.com/api/v1/services/embeddings/text-embedding/text-embedding"


def batched(inputs: List,
batch_size: int = DASHSCOPE_MAX_BATCH_SIZE) -> Generator[List, None, None]:
    # Split the input list into batches, since DashScope accepts at most 25 strings per call.
for i in range(0, len(inputs), batch_size):
yield inputs[i:i + batch_size]


class DashscopeEmbedding(Embedding):
"""The Dashscope embedding class."""
dashscope_api_key: Optional[str] = None

def __init__(self, **kwargs):
"""Initialize the dashscope embedding class, need dashscope api key."""
super().__init__(**kwargs)
self.dashscope_api_key = get_from_env("DASHSCOPE_API_KEY")
if not self.dashscope_api_key:
raise Exception("No DASHSCOPE_API_KEY in your environment.")


def get_embeddings(self, texts: List[str]) -> List[List[float]]:
"""
Retrieve text embeddings for a list of input texts.
This function interfaces with the DashScope embedding API to obtain
embeddings for a batch of input texts. It handles batching of input texts
to ensure efficient API calls. Each text is processed using the specified
embedding model.
Args:
texts (List[str]): A list of input texts to be embedded.
Returns:
List[List[float]]: A list of embeddings corresponding to the input texts.
Raises:
Exception: If the API call to DashScope fails, an exception is raised with
the respective error code and message.
"""
def post(post_params):
response = requests.post(
url=DASHSCOPE_EMBEDDING_URL,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {self.dashscope_api_key}"
},
data=json.dumps(post_params, ensure_ascii=False).encode(
"utf-8"),
timeout=120
)
resp_json = response.json()
return resp_json

result = []
post_params = {
"model": self.embedding_model_name,
"input": {},
"parameters": {
"text_type": "query"
}
}

for batch in batched(texts):
post_params["input"]["texts"] = batch
resp_json: dict = post(post_params)
data = resp_json.get("output")
if data:
data = data["embeddings"]
batch_result = [d['embedding'] for d in data if 'embedding' in d]
result += batch_result
else:
error_code = resp_json.get("code", "")
error_message = resp_json.get("message", "")
raise Exception(f"Failed to call dashscope embedding api, "
f"error code:{error_code}, "
f"error message:{error_message}")
return result

async def async_get_embeddings(self, texts: List[str]) -> List[List[float]]:
"""
Async version of get_embeddings.
This function interfaces with the DashScope embedding API to obtain
embeddings for a batch of input texts. It handles batching of input texts
to ensure efficient API calls. Each text is processed using the specified
embedding model.
Args:
texts (List[str]): A list of input texts to be embedded.
Returns:
List[List[float]]: A list of embeddings corresponding to the input texts.
Raises:
Exception: If the API call to DashScope fails, an exception is raised with
the respective error code and message.
"""
async def async_post(post_params):
async with aiohttp.ClientSession() as session:
async with await session.post(
url=DASHSCOPE_EMBEDDING_URL,
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {self.dashscope_api_key}"
},
data=json.dumps(post_params, ensure_ascii=False).encode(
"utf-8"),
timeout=120,
) as resp:
resp_json = await resp.json()
return resp_json

result = []
post_params = {
"model": self.embedding_model_name,
"input": {},
"parameters": {
"text_type": "query"
}
}

for batch in batched(texts):
post_params["input"]["texts"] = batch
resp_json: dict = await async_post(post_params)
data = resp_json.get("output")
if data:
data = data["embeddings"]
batch_result = [d['embedding'] for d in data if
'embedding' in d]
result += batch_result
else:
error_code = resp_json.get("code", "")
error_message = resp_json.get("message", "")
raise Exception(f"Failed to call dashscope embedding api, "
f"error code:{error_code}, "
f"error message:{error_message}")
return result
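A minimal usage sketch of the new class (not part of the commit): it assumes DASHSCOPE_API_KEY is exported in the environment and that embedding_model_name can be supplied through the Embedding base-class constructor, as the **kwargs forwarding above implies; the model name is illustrative.

```python
# Hedged usage sketch -- assumes DASHSCOPE_API_KEY is set and that the Embedding
# base class accepts embedding_model_name, as the code above references.
from agentuniverse.agent.action.knowledge.embedding.dashscope_embedding import (
    DashscopeEmbedding,
)

embedding = DashscopeEmbedding(embedding_model_name="text-embedding-v2")  # illustrative model name

# get_embeddings splits the inputs into batches of 25 and returns one vector per text.
vectors = embedding.get_embeddings(["What is agentUniverse?", "How do I build a ReAct agent?"])
print(len(vectors), len(vectors[0]))  # number of inputs, embedding dimension
```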
@@ -5,6 +5,7 @@
# @Author : wangchongshi
# @Email : wangchongshi.wcs@antgroup.com
# @FileName: openai_embedding.py

from typing import List, Optional, Any

from langchain_community.embeddings.openai import OpenAIEmbeddings
20 changes: 19 additions & 1 deletion agentuniverse/agent/action/tool/tool.py
Expand Up @@ -73,14 +73,32 @@ def input_check(self, kwargs: dict) -> None:
if key not in kwargs.keys():
raise Exception(f'{self.get_instance_code()} - The input must include key: {key}.')

def langchain_run(self, *args, callbacks=None, **kwargs):
"""The callable method that runs the tool."""
kwargs["callbacks"] = callbacks
tool_input = ToolInput(kwargs)
parse_result = self.parse_react_input(args[0])
for key in self.input_keys:
tool_input.add_data(key, parse_result[key])
return self.execute(tool_input)

def parse_react_input(self, input_str: str):
"""
        Parse the ReAct-style action input string into the tool's input dict.
        You can define your own parsing logic by overriding this method.
"""
return {
self.input_keys[0]: input_str
}

@abstractmethod
def execute(self, tool_input: ToolInput):
raise NotImplementedError

def as_langchain(self) -> LangchainTool:
"""Convert the agentUniverse(aU) tool class to the langchain tool class."""
return LangchainTool(name=self.name,
func=self.run,
func=self.langchain_run,
description=self.description)

def get_instance_code(self) -> str:
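The new parse_react_input hook above maps the raw ReAct action-input string onto the tool's first input key by default. A hypothetical override is sketched below; the class name, key names, and import path are assumptions for illustration, not part of the framework.

```python
# Hypothetical subclass sketch -- illustrative only, not part of the commit.
from agentuniverse.agent.action.tool.tool import Tool, ToolInput  # assumed import path


class WeatherTool(Tool):
    def parse_react_input(self, input_str: str) -> dict:
        # Expect a ReAct action input such as "Hangzhou, tomorrow"; fall back to
        # the default single-key behaviour when the string does not split cleanly.
        parts = [p.strip() for p in input_str.split(",", 1)]
        if len(parts) == 2:
            return {"city": parts[0], "date": parts[1]}
        return {self.input_keys[0]: input_str}

    def execute(self, tool_input: ToolInput):
        # A real tool would call a weather API here; this placeholder just echoes the input.
        return f"Looked up the weather with input: {tool_input}"
```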
Empty file.
27 changes: 27 additions & 0 deletions agentuniverse/agent/default/nl2api_agent/default_cn_prompt.yaml
@@ -0,0 +1,27 @@
introduction: 你是一位精通工具选择ai助手。
target: 你的目标是根据用户的问题选择出合适的工具。
instruction: |
你需要根据问题和用户提供的工具,选择其中的一个或几个工具用来回答用户提出的问题。
你必须从多个角度、维度分析用户的问题,需要根据背景和问题,决定使用哪些工具可以回答用户问题。
您可以使用以下工具:
{tools}
之前的对话:
{chat_history}
背景信息是:
{background}
回答必须是按照以下格式化的Json代码片段。
1. tools字段代表选择的几个工具的完整名称,列表格式。例如:[add, sub, mul, div]
2. thought字段代表选择工具的思考过程和原因。
```{{
"tools": list,
"thought": string
}}```
当前的问题:{input}
metadata:
type: 'PROMPT'
version: 'default_nl2api_agent.cn'
26 changes: 26 additions & 0 deletions agentuniverse/agent/default/nl2api_agent/default_en_prompt.yaml
@@ -0,0 +1,26 @@
introduction: You are an AI assistant proficient in tool selection.
target: Your goal is to select the appropriate tools based on the user's questions.
instruction: |
Your task is to select one or several tools from those provided by the user, based on their question and the context, in order to answer the user's query.
You must analyze the user's problem from multiple angles and dimensions, taking into account the background and context of the question, and decide which tools can be used to answer the user's question.
You may use the following tools:
{tools}
Previous conversation:
{chat_history}
The background information is:
{background}
The response must follow the format below as a formatted JSON code snippet.
1. The tools field represents the full names of the selected tools in a list format, such as:[add, sub, mul, div]
2. The thought field represents the thinking process and reasons behind the selection of tools.
```{{
"tools": list,
"thought": string
}}```
Question: {input}
metadata:
type: 'PROMPT'
version: 'default_nl2api_agent.en'
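For illustration only, here is the kind of fenced JSON snippet the prompt above asks the model to return, together with one way a caller might strip the fence and parse it; the tool names are examples, not defaults shipped with the agent.

```python
# Illustrative only -- shows the response shape requested by the prompt above.
import json

raw_model_output = """```{
    "tools": ["add", "mul"],
    "thought": "The question needs an addition followed by a multiplication."
}```"""

payload = json.loads(raw_model_output.strip().strip("`"))
print(payload["tools"])    # ['add', 'mul']
print(payload["thought"])
```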