DOC: Chinese doc for example part (#838)
ChengjieLi28 authored Dec 28, 2023
1 parent f8c1bef commit f06eaa8
Showing 6 changed files with 100 additions and 92 deletions.
51 changes: 29 additions & 22 deletions doc/source/locale/zh_CN/LC_MESSAGES/examples/ai_podcast.po
@@ -25,59 +25,63 @@ msgstr "示例:智能播客 🎙"

#: ../../source/examples/ai_podcast.rst:7
msgid "**Description**:"
msgstr ""
msgstr "**描述**:"

#: ../../source/examples/ai_podcast.rst:9
msgid "🎙️AI Podcast - Voice Conversations with Multiple Agents on M2 Max 💻"
msgstr ""
msgstr "🎙️AI播客 - 在M2 Max芯片上进行多智能体语音对话"

#: ../../source/examples/ai_podcast.rst:11
msgid "**Support Language** :"
msgstr ""
msgstr "**支持语言**:"

#: ../../source/examples/ai_podcast.rst:13
msgid "English (AI_Podcast.py)"
msgstr ""
msgstr "英文对应代码文件:AI_Podcast.py"

#: ../../source/examples/ai_podcast.rst:15
msgid "Chinese (AI_Podcast_ZH.py)"
msgstr ""
msgstr "中文对应代码文件:AI_Podcast_ZH.py"

#: ../../source/examples/ai_podcast.rst:17
msgid "**Used Technology (EN version)** :"
msgstr ""
msgstr "英文版本涉及技术:"

#: ../../source/examples/ai_podcast.rst:19
msgid ""
"@ `OpenAI <https://twitter.com/OpenAI>`_ 's `whisper "
"<https://pypi.org/project/openai-whisper/>`_"
msgstr ""
"@ `OpenAI <https://twitter.com/OpenAI>`_ 的 `whisper <https://pypi.org/project/openai-whisper/>`_"

#: ../../source/examples/ai_podcast.rst:21
msgid ""
"@ `ggerganov <https://twitter.com/ggerganov>`_ 's `ggml "
"<https://github.com/ggerganov/ggml>`_"
msgstr ""
"@ `ggerganov <https://twitter.com/ggerganov>`_ 的 `ggml <https://github.com/ggerganov/ggml>`_"

#: ../../source/examples/ai_podcast.rst:23
msgid ""
"@ `WizardLM_AI <https://twitter.com/WizardLM_AI>`_ 's `wizardlm v1.0 "
"<https://huggingface.co/WizardLM>`_"
msgstr ""
"@ `WizardLM_AI <https://twitter.com/WizardLM_AI>`_ 的 `wizardlm v1.0 <https://huggingface.co/WizardLM>`_"

#: ../../source/examples/ai_podcast.rst:25
msgid ""
"@ `lmsysorg <https://twitter.com/lmsysorg>`_ 's `vicuna v1.3 "
"<https://huggingface.co/lmsys/vicuna-7b-v1.3>`_"
msgstr ""
"@ `lmsysorg <https://twitter.com/lmsysorg>`_ 的 `vicuna v1.3 <https://huggingface.co/lmsys/vicuna-7b-v1.3>`_"

#: ../../source/examples/ai_podcast.rst:27
msgid "@ `Xinference <https://github.com/xorbitsai/inference>`_ as a launcher"
msgstr ""
msgstr "@ `Xinference <https://github.com/xorbitsai/inference>`_ 作为平台"

#: ../../source/examples/ai_podcast.rst:29
msgid "**Detailed Explanation on the Demo Functionality** :"
msgstr ""
msgstr "**关于演示功能的详细说明**:"

#: ../../source/examples/ai_podcast.rst:31
msgid ""
@@ -87,36 +91,42 @@ msgid ""
"\"username\", where \"username\" is given by user's input. Initialize a "
"empty chat history for the chatroom."
msgstr ""
"启动 Xinference,部署 Wizardlm 模型和 Vicuna 模型。"
"通过为两个模型指定名称并告诉它们有一个名为“username”的人类用户来启动聊天室,其中“username”是由用户输入提供的。然后为聊天室初始化一个空的聊天历史。"
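The chat-room setup this entry describes (two named agents plus a human user, sharing one empty history) could be sketched roughly as follows; the agent names and the history's data shape are illustrative assumptions, not taken from the demo source.

```python
# Hypothetical sketch of the chat-room setup; names and shapes are illustrative.
from typing import Dict, List


def init_chatroom(username: str, agent_names: List[str]) -> Dict:
    """Create a chat room with named agents and an empty shared history."""
    return {
        "user": username,          # the human participant, given by user input
        "agents": agent_names,     # e.g. the two deployed models
        "history": [],             # list of {"role": ..., "content": ...} turns
    }


room = init_chatroom("alice", ["wizardlm", "vicuna"])
```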

#: ../../source/examples/ai_podcast.rst:35
msgid ""
"Use Audio device to store recording into file, and transcribe the file "
"using OpenAI's Whisper to receive a human readable text as string."
msgstr ""
"使用音频设备将录音存储到文件中,然后使用OpenAI的Whisper将文件转录为人类可读的文本字符串。"
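The record-then-transcribe step above might look like the following, assuming the `openai-whisper` package; the model size and lazy import are choices made here for illustration.

```python
# Sketch of the transcription step, assuming the openai-whisper package.
def transcribe(audio_path: str, model_size: str = "base") -> str:
    """Transcribe a recorded audio file into human-readable text."""
    import whisper  # lazy import; requires `pip install openai-whisper`

    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path)
    return result["text"]
```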

#: ../../source/examples/ai_podcast.rst:37
msgid ""
"Based on the input message string, determine which agents the user want "
"to talk to. Call the target agents and parse in the input string and chat"
" history for the model to generate."
msgstr ""
"基于输入的消息字符串,确定用户想要与哪些代理(模型)进行对话。调用这些目标代理并将用户输入字符串和聊天历史作为输入让模型去生成对应的内容。"

#: ../../source/examples/ai_podcast.rst:40
msgid ""
"When the responses are ready, use Macos's \"Say\" Command to produce "
"audio through speaker. Each agents have their own voice while speaking."
msgstr ""
"当模型的输出准备好时,使用MacOS的“Say”命令通过扬声器生成音频。每个代理在说话时都有自己的声音。"
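macOS's `say` command takes a `-v` voice flag, which is presumably how each agent gets its own voice. A minimal sketch (the voice-name mapping is a hypothetical assumption):

```python
# Sketch of the speech step; AGENT_VOICES is a hypothetical mapping.
import subprocess
from typing import List

AGENT_VOICES = {"wizardlm": "Daniel", "vicuna": "Samantha"}


def say_argv(text: str, agent: str) -> List[str]:
    """Build the `say` command line for an agent's response."""
    return ["say", "-v", AGENT_VOICES.get(agent, "Alex"), text]


def speak(text: str, agent: str) -> None:
    subprocess.run(say_argv(text, agent), check=True)  # macOS only
```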

#: ../../source/examples/ai_podcast.rst:43
msgid ""
"Store the user input and the agent response into chat history, and "
"recursively looping the program until user explicitly says words like "
"\"see you\" in their responses."
msgstr ""
"将用户输入和代理响应存储到聊天历史中,并递归循环程序,直到用户在其输入中明确说出“再见”之类的话语。"
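The store-and-loop-until-farewell logic above can be sketched with two small helpers; the exact farewell phrase list is an assumption (the source only mentions "see you").

```python
# Sketch of the exit check and the history bookkeeping described above.
FAREWELLS = ("see you", "goodbye", "bye")  # assumed phrase list


def wants_to_quit(user_text: str) -> bool:
    """True once the user's utterance contains a farewell phrase."""
    lowered = user_text.lower()
    return any(phrase in lowered for phrase in FAREWELLS)


def record_turn(history: list, user_text: str, agent: str, reply: str) -> None:
    """Append one user/agent exchange to the shared chat history."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": agent, "content": reply})
```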

#: ../../source/examples/ai_podcast.rst:46
msgid "**Highlight Features with Xinference** :"
msgstr ""
msgstr "**Xinference的突出特性**:"

#: ../../source/examples/ai_podcast.rst:48
msgid ""
@@ -125,13 +135,16 @@ msgid ""
"resources, the framework can deploy any amount of models you like at the "
"same time."
msgstr ""
"借助 Xinference 的分布式系统,我们可以轻松在同一会话和同一“聊天室”中部署两个不同的模型。"
"在足够的资源情况下,该框架可以同时部署任意数量的模型。"

#: ../../source/examples/ai_podcast.rst:51
msgid ""
"With Xinference, you can deploy the model easily by just adding a few "
"lines of code. For examples, for launching the vicuna model in the demo, "
"just by::"
msgstr ""
"使用 Xinference,只需添加几行代码就可以轻松部署模型。例如,在演示中启动 vicuna 模型,只需::"
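The "few lines" launch referred to here might look like the sketch below, assuming the Xinference Python client and a locally running server; the endpoint and parameter values are illustrative and may differ from the demo's actual call.

```python
# Minimal sketch of launching vicuna via the Xinference client (assumed values).
def launch_vicuna(endpoint: str = "http://127.0.0.1:9997") -> str:
    """Launch a vicuna model on a running Xinference server; returns the model UID."""
    from xinference.client import Client  # requires a running Xinference server

    client = Client(endpoint)
    return client.launch_model(
        model_name="vicuna-v1.3",
        model_format="ggmlv3",
        model_size_in_billions=7,
        quantization="q4_0",
    )
```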

#: ../../source/examples/ai_podcast.rst:68
msgid ""
@@ -140,41 +153,35 @@ msgid ""
"the service at selected endpoint. \" You are now ready to play with your "
"llm model."
msgstr ""
"然后,Xinference 客户端将处理“目标模型的下载和缓存”、“为模型设置环境和进程”以及“在所选端点运行服务”。你现在已经准备好与你的 LLM 模型交互。"

#: ../../source/examples/ai_podcast.rst:71
msgid "**Original Demo Video** :"
msgstr ""
msgstr "**原始演示视频**:"

#: ../../source/examples/ai_podcast.rst:73
msgid ""
"`🎙️AI Podcast - Voice Conversations with Multiple Agents on M2 Max💻🔥🤖 <"
"https://twitter.com/yichaocheng/status/1679129417778442240>`_"
msgstr ""
"`🎙️AI播客 - 在M2 Max芯片上进行多智能体语音对话💻🔥🤖 <https://twitter.com/yichaocheng/status/1679129417778442240>`_"

#: ../../source/examples/ai_podcast.rst:75
msgid "**Source Code** :"
msgstr ""
msgstr "**源代码**:"

#: ../../source/examples/ai_podcast.rst:77
msgid ""
"`AI_Podcast "
"<https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py>`_"
" (English Version)"
msgstr ""
"`AI播客 <https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py>`_(英文版)"

#: ../../source/examples/ai_podcast.rst:79
msgid ""
"`AI_Podcast_ZH "
"<https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast_ZH.py>`_"
" (Chinese Version)"
msgstr ""
"`AI播客 <https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast_ZH.py>`_(中文版)"

#~ msgid "AI_Podcast_ZH (Chinese Version)"
#~ msgstr ""

#~ msgid ""
#~ "`AI_Podcast_ZH "
#~ "<https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast_ZH.py>`"
#~ " (Chinese Version)"
#~ msgstr ""
27 changes: 13 additions & 14 deletions doc/source/locale/zh_CN/LC_MESSAGES/examples/chatbot.po
@@ -25,83 +25,82 @@ msgstr "示例:命令行聊天机器人 🤖️"

#: ../../source/examples/chatbot.rst:7
msgid "**Description**:"
msgstr ""
msgstr "**描述**:"

#: ../../source/examples/chatbot.rst:9
msgid ""
"Demonstrate how to interact with Xinference to play with LLM chat "
"functionality with an AI agent in command line💻"
msgstr ""
"演示如何在命令行中与 Xinference 交互,使用基于 LLM 的聊天功能与 AI 代理互动💻"

#: ../../source/examples/chatbot.rst:11
msgid "**Used Technology**:"
msgstr ""
msgstr "**涉及技术**:"

#: ../../source/examples/chatbot.rst:13
msgid ""
"@ `ggerganov <https://twitter.com/ggerganov>`_ 's `ggml "
"<https://github.com/ggerganov/ggml>`_"
msgstr ""
"@ `ggerganov <https://twitter.com/ggerganov>`_ 的 `ggml <https://github.com/ggerganov/ggml>`_"

#: ../../source/examples/chatbot.rst:15
msgid "@ `Xinference <https://github.com/xorbitsai/inference>`_ as a launcher"
msgstr ""
"@ `Xinference <https://github.com/xorbitsai/inference>`_ 作为平台"

#: ../../source/examples/chatbot.rst:17
msgid ""
"@ All LLaMA and Chatglm models supported by `Xorbitsio inference "
"<https://github.com/xorbitsai/inference>`_"
msgstr ""
"@ 由 `Xinference <https://github.com/xorbitsai/inference>`_ 支持的所有 LLaMA 和 Chatglm 模型"

#: ../../source/examples/chatbot.rst:19
msgid "**Detailed Explanation on the Demo Functionality** :"
msgstr ""
msgstr "**关于演示功能的详细说明**:"

#: ../../source/examples/chatbot.rst:21
msgid ""
"Take the user command line input in the terminal and grab the required "
"parameters for model launching."
msgstr ""
"在终端中接受用户的命令行输入,并获取启动模型所需的参数。"

#: ../../source/examples/chatbot.rst:23
msgid ""
"Launch the Xinference frameworks and automatically deploy the model user "
"demanded into the cluster."
msgstr ""
"启动 Xinference 框架,并自动将用户需求的模型部署到集群中。"

#: ../../source/examples/chatbot.rst:25
msgid "Initialize an empty chat history to store all the context in the chatroom."
msgstr ""
"初始化一个空的聊天历史,以存储聊天室中的所有上下文。"

#: ../../source/examples/chatbot.rst:27
msgid ""
"Recursively ask for user's input as prompt and let the model to generate "
"response based on the prompt and the chat history. Show the Output of the"
" response in the terminal."
msgstr ""
"递归地请求用户的输入作为提示词,让模型基于提示词和聊天历史生成响应。在终端中显示响应的输出。"
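The prompt loop described above can be sketched as follows; `generate` stands in for the model call, and the exit words are assumptions (the demo's actual exit condition is not shown here).

```python
# Sketch of the command-line chat loop; `generate` is a hypothetical callable
# taking (prompt, history) and returning the model's reply string.
def chat_loop(generate, read_input=input, show=print) -> list:
    """Repeatedly prompt the user, generate a reply, and accumulate history."""
    history = []
    while True:
        prompt = read_input("you> ")
        if prompt.strip().lower() in {"exit", "quit"}:  # assumed exit words
            break
        reply = generate(prompt, history)
        show(reply)
        history.append({"role": "user", "content": prompt})
        history.append({"role": "assistant", "content": reply})
    return history
```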

#: ../../source/examples/chatbot.rst:30
msgid ""
"Store the user's input and agent's response into the chat history as "
"context for the upcoming rounds."
msgstr ""
"将用户的输入和代理的响应存储到聊天历史中,作为即将到来的对话轮次的上下文。"

#: ../../source/examples/chatbot.rst:32
msgid "**Source Code** :"
msgstr ""
msgstr "**源代码**:"

#: ../../source/examples/chatbot.rst:33
msgid ""
"`chat "
"<https://github.com/RayJi01/Xprobe_inference/blob/main/examples/chat.py>`_"
msgstr ""

#~ msgid "Example: chatbot 🤖️"
#~ msgstr ""

#~ msgid ""
#~ "Demonstrate how to interact with "
#~ "Xinference to play with LLM chat "
#~ "functionality with an AI agent 💻"
#~ msgstr ""

@@ -21,76 +21,83 @@ msgstr ""

#: ../../source/examples/gradio_chatinterface.rst:5
msgid "Example: Gradio ChatInterface🤗"
msgstr ""
msgstr "示例:Gradio 聊天界面🤗"

#: ../../source/examples/gradio_chatinterface.rst:7
msgid "**Description**:"
msgstr ""
msgstr "**描述**:"

#: ../../source/examples/gradio_chatinterface.rst:9
msgid ""
"This example showcases how to build a chatbot with 120 lines of code with"
" Gradio ChatInterface and Xinference local LLM"
msgstr ""
"这个例子展示了如何使用 Gradio ChatInterface 和 Xinference 本地 LLM,仅用 120 行代码构建一个聊天机器人。"

#: ../../source/examples/gradio_chatinterface.rst:11
msgid "**Used Technology**:"
msgstr ""
msgstr "**涉及技术**:"

#: ../../source/examples/gradio_chatinterface.rst:13
msgid ""
"@ `Xinference <https://github.com/xorbitsai/inference>`_ as a LLM model "
"hosting service"
msgstr ""
"@ `Xinference <https://github.com/xorbitsai/inference>`_ 作为 LLM 模型托管服务"

#: ../../source/examples/gradio_chatinterface.rst:15
msgid ""
"@ `Gradio <https://github.com/gradio-app/gradio>`_ as a web interface for"
" the chatbot"
msgstr ""
"@ `Gradio <https://github.com/gradio-app/gradio>`_ 作为聊天机器人的 Web 界面"

#: ../../source/examples/gradio_chatinterface.rst:17
msgid "**Detailed Explanation on the Demo Functionality** :"
msgstr ""
msgstr "**关于演示功能的详细说明**:"

#: ../../source/examples/gradio_chatinterface.rst:19
msgid ""
"Parse user-provided command line arguments to capture essential model "
"parameters such as model name, size, format, and quantization."
msgstr ""
"解析用户提供的命令行参数,以捕获关键的模型参数,如模型名称、大小、格式和量化方式。"

#: ../../source/examples/gradio_chatinterface.rst:21
msgid ""
"Establish a connection to the Xinference framework and deploy the "
"specified model, ensuring it's ready for real-time interactions."
msgstr ""
"建立与 Xinference 框架的连接并部署指定的模型,确保它准备好进行实时交互。"

#: ../../source/examples/gradio_chatinterface.rst:23
msgid ""
"Implement helper functions (flatten and to_chat) to efficiently handle "
"and store chat interactions, ensuring the model has context for "
"generating relevant responses."
msgstr ""
"实现辅助函数(flatten 和 to_chat),以高效处理和存储聊天交互,确保模型具有生成相关响应的上下文。"
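The `flatten` and `to_chat` helpers named in this entry might look roughly like the sketch below: Gradio keeps history as `[user, bot]` pairs, while a chat model wants a flat list of role-tagged messages. The exact signatures are assumptions, not the demo's actual code.

```python
# Hedged guess at the flatten/to_chat helpers; signatures are assumptions.
from typing import List, Tuple


def flatten(pairs: List[Tuple[str, str]]) -> List[str]:
    """[('hi', 'hello')] -> ['hi', 'hello']: interleave user/bot turns."""
    return [msg for pair in pairs for msg in pair]


def to_chat(texts: List[str]) -> List[dict]:
    """Alternate user/assistant roles over a flat message list."""
    roles = ("user", "assistant")
    return [{"role": roles[i % 2], "content": t} for i, t in enumerate(texts)]
```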

#: ../../source/examples/gradio_chatinterface.rst:25
msgid ""
"Set up an interactive chat interface using Gradio, allowing users to "
"communicate with the model in a user-friendly environment."
msgstr ""
"使用 Gradio 设置交互式聊天界面,允许用户在用户友好的环境中与模型进行通信。"

#: ../../source/examples/gradio_chatinterface.rst:27
msgid ""
"Activate the Gradio web interface, enabling users to start their chat "
"sessions and receive model-generated responses based on their queries."
msgstr ""
"启动 Gradio Web 界面,使用户能够开始他们的聊天会话,并根据他们的查询接收模型生成的响应。"

#: ../../source/examples/gradio_chatinterface.rst:29
msgid "**Source Code** :"
msgstr ""
msgstr "**源代码**:"

#: ../../source/examples/gradio_chatinterface.rst:30
msgid ""
"`Gradio ChatInterface "
"<https://github.com/xorbitsai/inference/blob/main/examples/gradio_chatinterface.py>`_"
msgstr ""
