From 9997aa6b0bb6bd7200a6f27f7e19573cc9696c71 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Wed, 24 May 2023 21:54:20 -0400 Subject: [PATCH 01/23] add agent notebook and documentation --- notebook/autogen_agent.ipynb | 537 ++++++++++++++++++++++ website/docs/Use-Cases/Auto-Generation.md | 2 + 2 files changed, 539 insertions(+) create mode 100644 notebook/autogen_agent.ipynb diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb new file mode 100644 index 0000000000..30ff6ccdab --- /dev/null +++ b/notebook/autogen_agent.ipynb @@ -0,0 +1,537 @@ +{ + "cells": [ + { + "attachments": {}, + "cell_type": "markdown", + "metadata": { + "slideshow": { + "slide_type": "slide" + } + }, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved. \n", + "\n", + "Licensed under the MIT License.\n", + "\n", + "# Interactive LLM Agent for Coding\n", + "\n", + "FLAML offers a suite of interactive LLM agents, which can be used to solve various tasks.\n", + "\n", + "In this notebook, we demonstrate how to use `PythonAgent` and `UserProxyAgent` to write and execute code. Here `PythonAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent that serves as a proxy for a user to execute the code written by `PythonAgent`. By setting `human_input_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `PythonAgent`. For example, when `human_input_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` passes the feedback directly to `PythonAgent` without taking any additional steps. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `PythonAgent` directly and return the execution results (success or failure and the corresponding outputs) to `PythonAgent`. 
\n", + "\n", + "## Requirements\n", + "\n", + "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the [openai] option:\n", + "```bash\n", + "pip install flaml[openai]==1.2.4\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "execution": { + "iopub.execute_input": "2023-02-13T23:40:52.317406Z", + "iopub.status.busy": "2023-02-13T23:40:52.316561Z", + "iopub.status.idle": "2023-02-13T23:40:52.321193Z", + "shell.execute_reply": "2023-02-13T23:40:52.320628Z" + } + }, + "outputs": [], + "source": [ + "# %pip install flaml[openai]==1.2.4" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set your API Endpoint\n", + "\n", + "The [`config_list_openai_aoai`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_openai_aoai) function tries to create a list of Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n", + "\n", + "- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n", + "- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n", + "- Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. 
Multiple bases can be stored, one per line.\n", + "\n", + "It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# import openai\n", + "# openai.api_key_path = \"key.txt\"" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example Task: Create and execute a Python script with agents\n", + "\n", + "In the example below, let's see how to use the agents in FLAML to write a Python script and execute it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from flaml.autogen.agent.coding_agent import PythonAgent\n", + "from flaml.autogen.agent.user_proxy_agent import UserProxyAgent\n", + "config_list = oai.config_list_gpt4_gpt35(key_file_path=KEY_LOC)\n", + "# the purpose of the following two lines is to log the conversation history\n", + "conversations = {}\n", + "oai.ChatCompletion.start_logging(conversations)\n", + "\n", + "# create an assistant which is essentially a PythonAgent instance named \"coding_agent\"\n", + "assistant = PythonAgent(\"coding_agent\", request_timeout=600, seed=42, config_list=config_list, model=\"gpt-3.5-turbo\")\n", + "# create a UserProxyAgent instance named \"user\"\n", + "user = UserProxyAgent(\n", + " \"user\",\n", + " human_input_mode=\"ALWAYS\",\n", + " max_consecutive_auto_reply= 10,\n", + " is_termination_msg=lambda x: x.rstrip().endswith(\"TERMINATE\"),\n", + " )\n", + "# the assistant receives a message from the user, which contains the task description\n", + "assistant.receive(\n", + "\"\"\"Create and execute a script to plot a rocket without using matplotlib\"\"\",\n", + "user,\n", + ")" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, let's see how to use the agents to first write the generated script to a file and 
then execute the script in two sessions of conversation between the `PythonAgent` and the `UserProxyAgent`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# it is suggested to reset the assistant to clear the state if the new task is not related to the previous one.\n", + "assistant.reset()\n", + "assistant.receive(\n", + "\"\"\"Create a temp.py file with the following content:\n", + "```\n", + "print('Hello world!')\n", + "```\"\"\",\n", + "user,\n", + ")\n", + "print(conversations)\n", + "\n", + "oai.ChatCompletion.start_logging(compact=False)\n", + "agent.receive(\"\"\"Execute temp.py\"\"\", user)\n", + "print(oai.ChatCompletion.logged_history)\n", + "oai.ChatCompletion.stop_logging()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.16" + }, + "vscode": { + "interpreter": { + "hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1" + } + }, + "widgets": { + "application/vnd.jupyter.widget-state+json": { + "state": { + "2d910cfd2d2a4fc49fc30fbbdc5576a7": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + 
"grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + "grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + }, + "454146d0f7224f038689031002906e6f": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HBoxModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HBoxModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + "_view_module_version": "2.0.0", + "_view_name": "HBoxView", + "box_style": "", + "children": [ + "IPY_MODEL_e4ae2b6f5a974fd4bafb6abb9d12ff26", + "IPY_MODEL_577e1e3cc4db4942b0883577b3b52755", + "IPY_MODEL_b40bdfb1ac1d4cffb7cefcb870c64d45" + ], + "layout": "IPY_MODEL_dc83c7bff2f241309537a8119dfc7555", + "tabbable": null, + "tooltip": null + } + }, + "577e1e3cc4db4942b0883577b3b52755": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "FloatProgressModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "FloatProgressModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + "_view_module_version": "2.0.0", + "_view_name": "ProgressView", + "bar_style": "success", + "description": "", + "description_allow_html": false, + "layout": "IPY_MODEL_2d910cfd2d2a4fc49fc30fbbdc5576a7", + "max": 1, + "min": 0, + "orientation": 
"horizontal", + "style": "IPY_MODEL_74a6ba0c3cbc4051be0a83e152fe1e62", + "tabbable": null, + "tooltip": null, + "value": 1 + } + }, + "6086462a12d54bafa59d3c4566f06cb2": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + "grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + "grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + }, + "74a6ba0c3cbc4051be0a83e152fe1e62": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "ProgressStyleModel", + "state": { + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "ProgressStyleModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "StyleView", + "bar_color": null, + "description_width": "" + } + }, + "7d3f3d9e15894d05a4d188ff4f466554": { + "model_module": "@jupyter-widgets/controls", + 
"model_module_version": "2.0.0", + "model_name": "HTMLStyleModel", + "state": { + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLStyleModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "StyleView", + "background": null, + "description_width": "", + "font_size": null, + "text_color": null + } + }, + "b40bdfb1ac1d4cffb7cefcb870c64d45": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HTMLModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + "_view_module_version": "2.0.0", + "_view_name": "HTMLView", + "description": "", + "description_allow_html": false, + "layout": "IPY_MODEL_f1355871cc6f4dd4b50d9df5af20e5c8", + "placeholder": "​", + "style": "IPY_MODEL_ca245376fd9f4354af6b2befe4af4466", + "tabbable": null, + "tooltip": null, + "value": " 1/1 [00:00<00:00, 44.69it/s]" + } + }, + "ca245376fd9f4354af6b2befe4af4466": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HTMLStyleModel", + "state": { + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLStyleModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "StyleView", + "background": null, + "description_width": "", + "font_size": null, + "text_color": null + } + }, + "dc83c7bff2f241309537a8119dfc7555": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": 
"@jupyter-widgets/base", + "_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + "grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + "grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + }, + "e4ae2b6f5a974fd4bafb6abb9d12ff26": { + "model_module": "@jupyter-widgets/controls", + "model_module_version": "2.0.0", + "model_name": "HTMLModel", + "state": { + "_dom_classes": [], + "_model_module": "@jupyter-widgets/controls", + "_model_module_version": "2.0.0", + "_model_name": "HTMLModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/controls", + "_view_module_version": "2.0.0", + "_view_name": "HTMLView", + "description": "", + "description_allow_html": false, + "layout": "IPY_MODEL_6086462a12d54bafa59d3c4566f06cb2", + "placeholder": "​", + "style": "IPY_MODEL_7d3f3d9e15894d05a4d188ff4f466554", + "tabbable": null, + "tooltip": null, + "value": "100%" + } + }, + "f1355871cc6f4dd4b50d9df5af20e5c8": { + "model_module": "@jupyter-widgets/base", + "model_module_version": "2.0.0", + "model_name": "LayoutModel", + "state": { + "_model_module": "@jupyter-widgets/base", + "_model_module_version": "2.0.0", + "_model_name": "LayoutModel", + "_view_count": null, + "_view_module": "@jupyter-widgets/base", + 
"_view_module_version": "2.0.0", + "_view_name": "LayoutView", + "align_content": null, + "align_items": null, + "align_self": null, + "border_bottom": null, + "border_left": null, + "border_right": null, + "border_top": null, + "bottom": null, + "display": null, + "flex": null, + "flex_flow": null, + "grid_area": null, + "grid_auto_columns": null, + "grid_auto_flow": null, + "grid_auto_rows": null, + "grid_column": null, + "grid_gap": null, + "grid_row": null, + "grid_template_areas": null, + "grid_template_columns": null, + "grid_template_rows": null, + "height": null, + "justify_content": null, + "justify_items": null, + "left": null, + "margin": null, + "max_height": null, + "max_width": null, + "min_height": null, + "min_width": null, + "object_fit": null, + "object_position": null, + "order": null, + "overflow": null, + "padding": null, + "right": null, + "top": null, + "visibility": null, + "width": null + } + } + }, + "version_major": 2, + "version_minor": 0 + } + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/website/docs/Use-Cases/Auto-Generation.md b/website/docs/Use-Cases/Auto-Generation.md index fde3f90bbf..78ae134edd 100644 --- a/website/docs/Use-Cases/Auto-Generation.md +++ b/website/docs/Use-Cases/Auto-Generation.md @@ -382,6 +382,8 @@ The compact history is more efficient and the individual API call history contai [`flaml.autogen.agents`](../reference/autogen/agent/agent) contains an experimental implementation of interactive agents which can adapt to human or simulated feedback. This subpackage is under active development. +Find a notebook example [here](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent.ipynb). 
+ ## Utilities for Applications ### Code From db3eeff17f849deeaecea1b1302d99fca23b9b13 Mon Sep 17 00:00:00 2001 From: Chi Wang Date: Thu, 25 May 2023 02:51:07 +0000 Subject: [PATCH 02/23] fix bug --- notebook/autogen_agent.ipynb | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 30ff6ccdab..79ef28db4b 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -29,7 +29,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2023-02-13T23:40:52.317406Z", @@ -65,8 +65,9 @@ "metadata": {}, "outputs": [], "source": [ - "# import openai\n", - "# openai.api_key_path = \"key.txt\"" + "from flaml import oai\n", + "\n", + "config_list = oai.config_list_gpt4_gpt35()" ] }, { @@ -87,24 +88,24 @@ "source": [ "from flaml.autogen.agent.coding_agent import PythonAgent\n", "from flaml.autogen.agent.user_proxy_agent import UserProxyAgent\n", - "config_list = oai.config_list_gpt4_gpt35(key_file_path=KEY_LOC)\n", + "\n", "# the purpose of the following two lines is to log the conversation history\n", "conversations = {}\n", "oai.ChatCompletion.start_logging(conversations)\n", "\n", "# create an assistant which is essentially a PythonAgent instance named \"coding_agent\"\n", - "assistant = PythonAgent(\"coding_agent\", request_timeout=600, seed=42, config_list=config_list, model=\"gpt-3.5-turbo\")\n", + "assistant = PythonAgent(\"coding_agent\", request_timeout=600, seed=42, config_list=config_list)\n", "# create a UserProxyAgent instance named \"user\"\n", "user = UserProxyAgent(\n", " \"user\",\n", " human_input_mode=\"ALWAYS\",\n", - " max_consecutive_auto_reply= 10,\n", + " max_consecutive_auto_reply=10,\n", " is_termination_msg=lambda x: x.rstrip().endswith(\"TERMINATE\"),\n", - " )\n", + ")\n", "# the assistant receives a message from the user, which contains the task 
description\n", "assistant.receive(\n", - "\"\"\"Create and execute a script to plot a rocket without using matplotlib\"\"\",\n", - "user,\n", + " \"\"\"Create and execute a script to plot a rocket without using matplotlib\"\"\",\n", + " user,\n", ")" ] }, @@ -134,7 +135,7 @@ "print(conversations)\n", "\n", "oai.ChatCompletion.start_logging(compact=False)\n", - "agent.receive(\"\"\"Execute temp.py\"\"\", user)\n", + "assistant.receive(\"\"\"Execute temp.py\"\"\", user)\n", "print(oai.ChatCompletion.logged_history)\n", "oai.ChatCompletion.stop_logging()" ] From c634fe2635ef22dc9bacd86366ea0dfbe6b98c7c Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Thu, 25 May 2023 13:48:06 -0400 Subject: [PATCH 03/23] set flush to True when printing msg in agent --- flaml/autogen/agent/agent.py | 4 +- notebook/autogen_agent.ipynb | 166 +++++++++++++++++++++++++++++++++-- 2 files changed, 163 insertions(+), 7 deletions(-) diff --git a/flaml/autogen/agent/agent.py b/flaml/autogen/agent/agent.py index bbac5b0f0d..b81ca35df7 100644 --- a/flaml/autogen/agent/agent.py +++ b/flaml/autogen/agent/agent.py @@ -37,8 +37,8 @@ def _send(self, message, recipient): def _receive(self, message, sender): """Receive a message from another agent.""" - print("\n****", self.name, "received message from", sender.name, "****\n") - print(message) + print("\n****", self.name, "received message from", sender.name, "****\n", flush=True) + print(message, flush=True) self._conversations[sender.name].append({"content": message, "role": "user"}) def receive(self, message, sender): diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 79ef28db4b..be8af03066 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -61,7 +61,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "metadata": {}, "outputs": [], "source": [ @@ -82,9 +82,93 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "metadata": {}, - 
"outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "Create and execute a script to plot a rocket without using matplotlib\n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "Creating a rocket involves using ASCII characters to display it visually. Here's a simple script to get you started:\n", + "\n", + "```python\n", + "# filename: rocket.py\n", + "def plot_rocket():\n", + " rocket = '''\n", + " |\n", + " /_\\\n", + " /^|^\\ \n", + " //| \\\\\n", + " // | \\\\\n", + " '''\n", + " print(rocket)\n", + "\n", + "if __name__ == \"__main__\":\n", + " plot_rocket()\n", + "```\n", + "\n", + "Save this code in a file named `rocket.py` and execute the script. After that, let me know the results.\n", + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: \n", + " |\n", + " /_ /^|^\\ \n", + " //| \\\n", + " // | \\\n", + " \n", + "\n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "It seems that there's an issue with the spacing in the rocket drawing. Let's fix that.\n", + "\n", + "```python\n", + "# filename: rocket.py\n", + "def plot_rocket():\n", + " rocket = '''\n", + " |\n", + " /_\\\\\n", + " /^|^\\\\\n", + " //|\\\\\\\\\n", + " // | \\\\\\\\\n", + " '''\n", + " print(rocket)\n", + "\n", + "if __name__ == \"__main__\":\n", + " plot_rocket()\n", + "```\n", + "\n", + "Save this code in the existing `rocket.py` file and execute the script again. After that, let me know the results.\n", + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: \n", + " |\n", + " /_\\\n", + " /^|^\\\n", + " //|\\\\\n", + " // | \\\\\n", + " \n", + "\n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "Great! The rocket is now displayed correctly. 
If you have any more tasks or modifications, please let me know. Otherwise, I hope you enjoyed the ASCII rocket!\n", + "\n", + "TERMINATE\n" + ] + } + ], "source": [ "from flaml.autogen.agent.coding_agent import PythonAgent\n", "from flaml.autogen.agent.user_proxy_agent import UserProxyAgent\n", @@ -119,9 +203,81 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 12, "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "Create a temp.py file with the following content:\n", + "```\n", + "print('Hello world!')\n", + "```\n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "Here is the code to create the temp.py file with the specified content. Please execute this code:\n", + "\n", + "```python\n", + "with open('temp.py', 'w') as file:\n", + " file.write(\"print('Hello world!')\")\n", + "```\n", + "\n", + "After executing this code, you should have a file named temp.py with the content:\n", + "\n", + "```\n", + "print('Hello world!')\n", + "```\n", + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: \n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "Great! The temp.py file has been created successfully. Now, you can run this file to see the output. If you need any further assistance, feel free to ask.\n", + "\n", + "TERMINATE\n", + "{'[{\"content\": \"You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. 
Check the execution result returned by the user.\\\\n If the result indicates there is an error, fix the error and output the code again.\\\\n Reply \\\\\"TERMINATE\\\\\" in the end when the task is done.\\\\n \", \"role\": \"system\"}, {\"content\": \"Create and execute a script to plot a rocket without using matplotlib\", \"role\": \"user\"}, {\"content\": \"Creating a rocket involves using ASCII characters to display it visually. Here\\'s a simple script to get you started:\\\\n\\\\n```python\\\\n# filename: rocket.py\\\\ndef plot_rocket():\\\\n rocket = \\'\\'\\'\\\\n |\\\\n /_\\\\\\\\\\\\n /^|^\\\\\\\\ \\\\n //| \\\\\\\\\\\\\\\\\\\\n // | \\\\\\\\\\\\\\\\\\\\n \\'\\'\\'\\\\n print(rocket)\\\\n\\\\nif __name__ == \\\\\"__main__\\\\\":\\\\n plot_rocket()\\\\n```\\\\n\\\\nSave this code in a file named `rocket.py` and execute the script. After that, let me know the results.\", \"role\": \"assistant\"}, {\"content\": \"exitcode: 0 (execution succeeded)\\\\nCode output: \\\\n |\\\\n /_ /^|^\\\\\\\\ \\\\n //| \\\\\\\\\\\\n // | \\\\\\\\\\\\n \\\\n\", \"role\": \"user\"}, {\"content\": \"It seems that there\\'s an issue with the spacing in the rocket drawing. Let\\'s fix that.\\\\n\\\\n```python\\\\n# filename: rocket.py\\\\ndef plot_rocket():\\\\n rocket = \\'\\'\\'\\\\n |\\\\n /_\\\\\\\\\\\\\\\\\\\\n /^|^\\\\\\\\\\\\\\\\\\\\n //|\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n // | \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n \\'\\'\\'\\\\n print(rocket)\\\\n\\\\nif __name__ == \\\\\"__main__\\\\\":\\\\n plot_rocket()\\\\n```\\\\n\\\\nSave this code in the existing `rocket.py` file and execute the script again. After that, let me know the results.\", \"role\": \"assistant\"}, {\"content\": \"exitcode: 0 (execution succeeded)\\\\nCode output: \\\\n |\\\\n /_\\\\\\\\\\\\n /^|^\\\\\\\\\\\\n //|\\\\\\\\\\\\\\\\\\\\n // | \\\\\\\\\\\\\\\\\\\\n \\\\n\", \"role\": \"user\"}, {\"content\": \"Great! The rocket is now displayed correctly. 
If you have any more tasks or modifications, please let me know. Otherwise, I hope you enjoyed the ASCII rocket!\\\\n\\\\nTERMINATE\", \"role\": \"assistant\"}]': {'created_at': [0, 1, 2], 'cost': [0.01098, 0.01554, 0.01566]}, '[{\"content\": \"You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. Check the execution result returned by the user.\\\\n If the result indicates there is an error, fix the error and output the code again.\\\\n Reply \\\\\"TERMINATE\\\\\" in the end when the task is done.\\\\n \", \"role\": \"system\"}, {\"content\": \"Create a temp.py file with the following content:\\\\n```\\\\nprint(\\'Hello world!\\')\\\\n```\", \"role\": \"user\"}, {\"content\": \"Here is the code to create the temp.py file with the specified content. Please execute this code:\\\\n\\\\n```python\\\\nwith open(\\'temp.py\\', \\'w\\') as file:\\\\n file.write(\\\\\"print(\\'Hello world!\\')\\\\\")\\\\n```\\\\n\\\\nAfter executing this code, you should have a file named temp.py with the content:\\\\n\\\\n```\\\\nprint(\\'Hello world!\\')\\\\n```\", \"role\": \"assistant\"}, {\"content\": \"exitcode: 0 (execution succeeded)\\\\nCode output: \", \"role\": \"user\"}, {\"content\": \"Great! The temp.py file has been created successfully. Now, you can run this file to see the output. 
If you need any further assistance, feel free to ask.\\\\n\\\\nTERMINATE\", \"role\": \"assistant\"}]': {'created_at': [3, 4], 'cost': [0.009120000000000001, 0.009869999999999999]}}\n", + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "Execute temp.py\n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "To execute temp.py, run the following code:\n", + "\n", + "```python\n", + "import os\n", + "\n", + "os.system('python temp.py')\n", + "```\n", + "\n", + "This code imports the os module and then runs the temp.py file. After executing this code, you should see the output:\n", + "\n", + "Hello world!\n", + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: Hello world!\n", + "\n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "I'm glad that the code execution was successful and you got the desired output! If you need any further help or assistance with another task, feel free to ask.\n", + "\n", + "TERMINATE\n", + "{0: {'request': {'messages': [{'content': 'You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. Check the execution result returned by the user.\\n If the result indicates there is an error, fix the error and output the code again.\\n Reply \"TERMINATE\" in the end when the task is done.\\n ', 'role': 'system'}, {'content': \"Create a temp.py file with the following content:\\n```\\nprint('Hello world!')\\n```\", 'role': 'user'}, {'content': 'Here is the code to create the temp.py file with the specified content. 
Please execute this code:\\n\\n```python\\nwith open(\\'temp.py\\', \\'w\\') as file:\\n file.write(\"print(\\'Hello world!\\')\")\\n```\\n\\nAfter executing this code, you should have a file named temp.py with the content:\\n\\n```\\nprint(\\'Hello world!\\')\\n```', 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: ', 'role': 'user'}, {'content': 'Great! The temp.py file has been created successfully. Now, you can run this file to see the output. If you need any further assistance, feel free to ask.\\n\\nTERMINATE', 'role': 'assistant'}, {'content': 'Execute temp.py', 'role': 'user'}, {'content': \"To execute temp.py, run the following code:\\n\\n```python\\nimport os\\n\\nos.system('python temp.py')\\n```\\n\\nThis code imports the os module and then runs the temp.py file. After executing this code, you should see the output:\\n\\nHello world!\", 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: Hello world!\\n', 'role': 'user'}, {'content': \"I'm glad that the code execution was successful and you got the desired output! If you need any further help or assistance with another task, feel free to ask.\\n\\nTERMINATE\", 'role': 'assistant'}], 'model': 'gpt-4', 'request_timeout': 600}, 'response': {'id': 'chatcmpl-7K97cptjXXXV0McBqOUBchGszCHMd', 'object': 'chat.completion', 'created': 1685035980, 'model': 'gpt-4-0314', 'usage': {'prompt_tokens': 302, 'completion_tokens': 53, 'total_tokens': 355}, 'choices': [{'message': {'role': 'assistant', 'content': \"To execute temp.py, run the following code:\\n\\n```python\\nimport os\\n\\nos.system('python temp.py')\\n```\\n\\nThis code imports the os module and then runs the temp.py file. 
After executing this code, you should see the output:\\n\\nHello world!\"}, 'finish_reason': 'stop', 'index': 0}], 'cost': 0.012240000000000001}}, 1: {'request': {'messages': [{'content': 'You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. Check the execution result returned by the user.\\n If the result indicates there is an error, fix the error and output the code again.\\n Reply \"TERMINATE\" in the end when the task is done.\\n ', 'role': 'system'}, {'content': \"Create a temp.py file with the following content:\\n```\\nprint('Hello world!')\\n```\", 'role': 'user'}, {'content': 'Here is the code to create the temp.py file with the specified content. Please execute this code:\\n\\n```python\\nwith open(\\'temp.py\\', \\'w\\') as file:\\n file.write(\"print(\\'Hello world!\\')\")\\n```\\n\\nAfter executing this code, you should have a file named temp.py with the content:\\n\\n```\\nprint(\\'Hello world!\\')\\n```', 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: ', 'role': 'user'}, {'content': 'Great! The temp.py file has been created successfully. Now, you can run this file to see the output. If you need any further assistance, feel free to ask.\\n\\nTERMINATE', 'role': 'assistant'}, {'content': 'Execute temp.py', 'role': 'user'}, {'content': \"To execute temp.py, run the following code:\\n\\n```python\\nimport os\\n\\nos.system('python temp.py')\\n```\\n\\nThis code imports the os module and then runs the temp.py file. 
After executing this code, you should see the output:\\n\\nHello world!\", 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: Hello world!\\n', 'role': 'user'}, {'content': \"I'm glad that the code execution was successful and you got the desired output! If you need any further help or assistance with another task, feel free to ask.\\n\\nTERMINATE\", 'role': 'assistant'}], 'model': 'gpt-4', 'request_timeout': 600}, 'response': {'id': 'chatcmpl-7K97qGxiz1iwj1jBawK0ZkreFlJh4', 'object': 'chat.completion', 'created': 1685035994, 'model': 'gpt-4-0314', 'usage': {'prompt_tokens': 378, 'completion_tokens': 36, 'total_tokens': 414}, 'choices': [{'message': {'role': 'assistant', 'content': \"I'm glad that the code execution was successful and you got the desired output! If you need any further help or assistance with another task, feel free to ask.\\n\\nTERMINATE\"}, 'finish_reason': 'stop', 'index': 0}], 'cost': 0.0135}}}\n" + ] + } + ], "source": [ "# it is suggested to reset the assistant to clear the state if the new task is not related to the previous one.\n", "assistant.reset()\n", From de1b9e7974e0f5e8419731ba54cd4f8119aea48a Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 00:22:56 -0400 Subject: [PATCH 04/23] add a math problem in agent notebook --- flaml/autogen/agent/user_proxy_agent.py | 1 + notebook/autogen_agent.ipynb | 197 +++++++++++++++++++++++- 2 files changed, 194 insertions(+), 4 deletions(-) diff --git a/flaml/autogen/agent/user_proxy_agent.py b/flaml/autogen/agent/user_proxy_agent.py index 14e663cebe..b3dffa72b0 100644 --- a/flaml/autogen/agent/user_proxy_agent.py +++ b/flaml/autogen/agent/user_proxy_agent.py @@ -119,4 +119,5 @@ def receive(self, message, sender): return self._consecutive_auto_reply_counter[sender.name] += 1 + print("NO HUMAN INPUT RECEIVED. 
TRYING TO AUTO REPLY FOR THE USR...", flush=True) self.auto_reply(message, sender, default_reply=reply) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index be8af03066..33f6eee7f9 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -67,7 +67,9 @@ "source": [ "from flaml import oai\n", "\n", - "config_list = oai.config_list_gpt4_gpt35()" + "config_list = oai.config_list_gpt4_gpt35()\n", + "import openai\n", + "openai.api_key = \"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\"" ] }, { @@ -75,14 +77,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Example Task: Create and execute a python script with agents\n", + "### Example Task 1: Create and execute a python script with agents\n", "\n", "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script." ] }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 2, "metadata": {}, "outputs": [ { @@ -115,6 +117,7 @@ "```\n", "\n", "Save this code in a file named `rocket.py` and execute the script. After that, let me know the results.\n", + "NO HUMAN INPUT RECEIVED. TRYING TO AUTO REPLY FOR THE USR...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -148,6 +151,7 @@ "```\n", "\n", "Save this code in the existing `rocket.py` file and execute the script again. After that, let me know the results.\n", + "NO HUMAN INPUT RECEIVED. TRYING TO AUTO REPLY FOR THE USR...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -203,7 +207,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 3, "metadata": {}, "outputs": [ { @@ -232,6 +236,7 @@ "```\n", "print('Hello world!')\n", "```\n", + "NO HUMAN INPUT RECEIVED. TRYING TO AUTO REPLY FOR THE USR...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -262,6 +267,7 @@ "This code imports the os module and then runs the temp.py file. 
After executing this code, you should see the output:\n", "\n", "Hello world!\n", + "NO HUMAN INPUT RECEIVED. TRYING TO AUTO REPLY FOR THE USR...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -295,6 +301,189 @@ "print(oai.ChatCompletion.logged_history)\n", "oai.ChatCompletion.stop_logging()" ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example Task 2: Math problem solving\n", + "\n", + "Next let's use the agents to solve a math problem." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "\n", + " Find $a + b + c$, given that $x+y \\neq -1$ and \n", + " \\begin{align}\n", + "\tax + by + c & = x + 7,\\\n", + "\ta + bx + cy & = 2x + 6y,\\\n", + "\tay + b + cx & = 4x + y.\n", + "\t\\end{align}.\n", + " \n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "We can solve this system of linear equations using Python. Let me write the code for you:\n", + "\n", + "```python\n", + "import numpy as np\n", + "\n", + "A = np.array([[1, 1, 1], [2, 6, 0], [4, 0, 1]])\n", + "B = np.array([1 + 7, 2 * 1 + 6 * 1, 4 * 1 + 1])\n", + "\n", + "result = np.linalg.solve(A, B)\n", + "sum_result = np.sum(result)\n", + "\n", + "print(\"a + b + c =\", sum_result)\n", + "```\n", + "\n", + "You can execute this code, and it will return the value of $a + b + c$.\n", + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "Your solution is too complicated and expensive. Try to solve the problem smartly. For example instead of writing python code to solve it, you can first observe the equation system to see if there are nice properties that could be used to solve the problem. 
\n", + "\n", + "**** user received message from coding_agent ****\n", + "\n", + "You're right; we can simplify the problem by analyzing the given system of equations. Observe that:\n", + "\n", + "1. From the first equation, we have $c = (x + 7) - (ax + by)$\n", + "2. From the second equation, we have $(2x + 6y) = a + bx + cy$\n", + "\n", + "We can substitute $c$ from the first equation into the second equation:\n", + "\n", + "$(2x + 6y) = a + bx + (x + 7) - (ax + by)$\n", + "\n", + "Now, we can simplify:\n", + "\n", + "$a - ax = 7$\n", + "\n", + "Let's solve for $a$ from this equation:\n", + "$a(1 - x) = 7$\n", + "$a = 7 / (1 - x)$\n", + "\n", + "Now, we have the value of $a$ in terms of $x$. We can use this result to substitute back into the second equation:\n", + "\n", + "$2x + 6y = 7 / (1 - x) + bx + (x + 7) - (7x / (1 - x) + by)$\n", + "\n", + "Now let's deal with the third equation:\n", + "\n", + "3. From the third equation, we have $b = (4x + y) - (ay + cx)$\n", + "\n", + "Substitute the value of $a$ and $c$ into the third equation:\n", + "\n", + "$b = (4x + y) - (7y / (1 - x) + (x + 7) - (ax + by))$\n", + "\n", + "We can use these simplified equations to find the sum of $a + b + c$ without calculating the individual value of $a$, $b$, and $c$.\n", + "\n", + "```python\n", + "import sympy as sp\n", + "\n", + "x, y = sp.symbols('x y')\n", + "a = 7 / (1 - x)\n", + "c = (x + 7) - (a * x)\n", + "b = (4 * x + y) - (a * y + c)\n", + "\n", + "sum_result = a + b + c\n", + "sum_result = sp.simplify(sum_result)\n", + "\n", + "print(\"a + b + c =\", sum_result)\n", + "```\n", + "\n", + "Execute this code, and it will give you a simplified expression for $a + b + c$ in terms of $x$ and $y$. Note that this will still give an answer in terms of $x$ and $y$, but will not require solving the full system of equations.\n", + "\n", + "**** coding_agent received message from user ****\n", + "\n", + "Still too complicated. Try again. Solve it smartly. 
\n",
+    "\n",
+    "**** user received message from coding_agent ****\n",
+    "\n",
+    "I apologize for the confusion earlier. Let's try another approach by analyzing the problem.\n",
+    "\n",
+    "Add the first and the third equation:\n",
+    "\n",
+    "$(ax + by + c) + (ay + b + cx) = (x + 7) + (4x + y)$\n",
+    "\n",
+    "Now, by collecting the terms:\n",
+    "\n",
+    "$(a+c)x + (b+a)y + (c+b) = 5x + y + 7$\n",
+    "\n",
+    "Left-hand side of the equation is symmetrical with respect to a, b, c.\n",
+    "\n",
+    "Thus, without even calculating a, b, and c we can find their sum:\n",
+    "\n",
+    "$a + b + c = 5$\n",
+    "\n",
+    "So the answer is 5. I hope this solution is more satisfactory. Please let me know if you have any questions or if you'd like me to help with any other tasks. \n",
+    "\n",
+    "TERMINATE\n",
+    "\n",
+    "**** coding_agent received message from user ****\n",
+    "\n",
+    "You are almost right that the equation system is symmetric. Use this property. Also, use python if needed to avoid arithmetic errors when manipulating the equation system.\n",
+    "\n",
+    "**** user received message from coding_agent ****\n",
+    "\n",
+    "I apologize for the error in my previous response. Let's exploit the symmetric property of the system of equations correctly.\n",
+    "\n",
+    "Add all three equations:\n",
+    "\n",
+    "$(ax + by + c) + (a + bx + cy) + (ay + b + cx) = (x + 7) + (2x + 6y) + (4x + y)$\n",
+    "\n",
+    "Now, by collecting the terms:\n",
+    "\n",
+    "$(a+b+c)(x+y+1) = 7x + 7y + 7$\n",
+    "\n",
+    "Since we know that $x + y \\neq -1$, we can safely divide both sides of the equation by $x+y+1$:\n",
+    "\n",
+    "$a+b+c = 7$\n",
+    "\n",
+    "So the answer is 7. 
We didn't need to use Python to solve this particular problem, as the symmetric property allowed us to solve it directly.\n", + "\n", + "TERMINATE\n" + ] + } + ], + "source": [ + "math_problem_to_solve = \"\"\"\n", + " Find $a + b + c$, given that $x+y \\\\neq -1$ and \n", + " \\\\begin{align}\n", + "\tax + by + c & = x + 7,\\\\\n", + "\ta + bx + cy & = 2x + 6y,\\\\\n", + "\tay + b + cx & = 4x + y.\n", + "\t\\\\end{align}.\n", + " \"\"\"\n", + "# the purpose of the following two lines is to log the conversation history\n", + "conversations = {}\n", + "oai.ChatCompletion.start_logging(conversations)\n", + "\n", + "# create an assistant which is essentially a PythonAgent instance named \"coding_agent\"\n", + "assistant = PythonAgent(\"coding_agent\", request_timeout=600, seed=42, config_list=config_list)\n", + "# create a UserProxyAgent instance named \"user\"\n", + "user = UserProxyAgent(\n", + " \"user\",\n", + " human_input_mode=\"ALWAYS\",\n", + " max_consecutive_auto_reply=10,\n", + " is_termination_msg=lambda x: x.rstrip().endswith(\"TERMINATE\"),\n", + ")\n", + "# the assistant receives a message from the user, which contains the task description\n", + "assistant.receive(\n", + " math_problem_to_solve,\n", + " user,\n", + ")" + ] } ], "metadata": { From 38151f8bcbd37a4630b4caa495c5b1080321416c Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 08:55:52 -0400 Subject: [PATCH 05/23] remove --- notebook/autogen_agent.ipynb | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 33f6eee7f9..ba04d253a6 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -67,9 +67,7 @@ "source": [ "from flaml import oai\n", "\n", - "config_list = oai.config_list_gpt4_gpt35()\n", - "import openai\n", - "openai.api_key = \"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\"" + "config_list = oai.config_list_gpt4_gpt35()" ] }, { From 
0f8287428c8c064461909c390f72fc3c4b86af9e Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 13:23:56 -0400 Subject: [PATCH 06/23] header --- notebook/autogen_agent.ipynb | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index ba04d253a6..4c9b12c1f8 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -9,11 +9,7 @@ } }, "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved. \n", - "\n", - "Licensed under the MIT License.\n", - "\n", - "# Interactive LLM Agent for coding\n", + "# Interactive LLM Agent for coding and math problem-solving\n", "\n", "FLAML offers a suite of interactive LLM agents, which can be used to solve various tasks.\n", "\n", @@ -48,7 +44,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Set your API Endpoint\n", + "## Set your API Endpoint\n", "\n", "The [`config_list_openai_aoai`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_openai_aoai) function tries to create a list of Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n", "\n", @@ -75,7 +71,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Example Task 1: Create and execute a python script with agents\n", + "## Example Task 1: Create and execute a python script with agents\n", "\n", "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script." 
] From 60ed01096d6e6a41fbd4faeb532fd6f6b91661ad Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 14:01:49 -0400 Subject: [PATCH 07/23] improve notebook doc --- flaml/autogen/agent/user_proxy_agent.py | 2 +- notebook/autogen_agent.ipynb | 14 +++++++------- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/flaml/autogen/agent/user_proxy_agent.py b/flaml/autogen/agent/user_proxy_agent.py index b3dffa72b0..8472ef3b82 100644 --- a/flaml/autogen/agent/user_proxy_agent.py +++ b/flaml/autogen/agent/user_proxy_agent.py @@ -119,5 +119,5 @@ def receive(self, message, sender): return self._consecutive_auto_reply_counter[sender.name] += 1 - print("NO HUMAN INPUT RECEIVED. TRYING TO AUTO REPLY FOR THE USR...", flush=True) + print(">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...", flush=True) self.auto_reply(message, sender, default_reply=reply) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 4c9b12c1f8..cc32a7ac56 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -171,10 +171,6 @@ "from flaml.autogen.agent.coding_agent import PythonAgent\n", "from flaml.autogen.agent.user_proxy_agent import UserProxyAgent\n", "\n", - "# the purpose of the following two lines is to log the conversation history\n", - "conversations = {}\n", - "oai.ChatCompletion.start_logging(conversations)\n", - "\n", "# create an assistant which is essentially a PythonAgent instance named \"coding_agent\"\n", "assistant = PythonAgent(\"coding_agent\", request_timeout=600, seed=42, config_list=config_list)\n", "# create a UserProxyAgent instance named \"user\"\n", @@ -196,7 +192,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Next, let's see how to use the agents to first write the generated script to a file and then execute the script in two sessions of conversation between the `PythonAgent` and the `UserProxyAgent`." 
+ "Next, let's see how to use the agents to first write the generated script to a file and then execute the script in two sessions of conversation between the `PythonAgent` and the `UserProxyAgent`. In this example, we also log the conversation between the two agents using the logging functionality in FLAML. " ] }, { @@ -279,6 +275,10 @@ } ], "source": [ + "# the purpose of the following two lines is to log the conversation history\n", + "conversations = {}\n", + "oai.ChatCompletion.start_logging(conversations)\n", + "\n", "# it is suggested to reset the assistant to clear the state if the new task is not related to the previous one.\n", "assistant.reset()\n", "assistant.receive(\n", @@ -301,9 +301,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Example Task 2: Math problem solving\n", + "## Example Task 2: Math problem solving\n", "\n", - "Next let's use the agents to solve a math problem." + "Next, let's use the agents to solve a math problem. In the examples shown above on coding tasks, the real user is not providing any inputs when prompted for feedback. Thus UserProxyAgent provides auto replies to the PythonAgent. In this example, we demonstrate an example where the real user provides feedback to the PythonAgent via the UserProxyAgent." 
] }, { From 74fb4595ddd9762fcaa511ea0ed7792920302ff9 Mon Sep 17 00:00:00 2001 From: Chi Wang Date: Fri, 26 May 2023 18:31:56 +0000 Subject: [PATCH 08/23] notebook update --- notebook/autogen_agent.ipynb | 20 ++++++-------------- 1 file changed, 6 insertions(+), 14 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index cc32a7ac56..817bfb70d7 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -9,11 +9,11 @@ } }, "source": [ - "# Interactive LLM Agent for coding and math problem-solving\n", + "# Interactive LLM Agent\n", "\n", - "FLAML offers a suite of interactive LLM agents, which can be used to solve various tasks.\n", + "FLAML offers an experimental feature of interactive LLM agents, which can be used to solve various tasks, including coding and math problem-solving.\n", "\n", - "In this notebook, we demonstrate how to use `PythonAgent` and `UserProxyAgent` to write code and execute the code. Here `PythonAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `PythonAgent`. By setting `user_interaction_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `PythonAgent`. For example, when `user_interaction_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `PythonAgent` without doing any additional steps. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `PythonAgent` directly and returns the execution results (success or failure and corresponding outputs) to `PythonAgent`. \n", + "In this notebook, we demonstrate how to use `PythonAgent` and `UserProxyAgent` to write code and execute the code. 
Here `PythonAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `PythonAgent`. By setting `user_interaction_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `PythonAgent`. For example, when `user_interaction_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `PythonAgent` without doing any additional steps. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `PythonAgent` directly and return the execution results (success or failure and corresponding outputs) to `PythonAgent`.\n",
     "\n",
     "## Requirements\n",
     "\n",
@@ -46,7 +46,7 @@
    "source": [
     "## Set your API Endpoint\n",
     "\n",
-    "The [`config_list_openai_aoai`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_openai_aoai) function tries to create a list of Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+    "The [`config_list_gpt4_gpt35`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_gpt4_gpt35) function tries to create a list of gpt-4 and gpt-3.5 configurations using Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
     "\n",
     "- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
     "- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. 
Multiple keys can be stored, one per line.\n", @@ -275,10 +275,6 @@ } ], "source": [ - "# the purpose of the following two lines is to log the conversation history\n", - "conversations = {}\n", - "oai.ChatCompletion.start_logging(conversations)\n", - "\n", "# it is suggested to reset the assistant to clear the state if the new task is not related to the previous one.\n", "assistant.reset()\n", "assistant.receive(\n", @@ -288,12 +284,8 @@ "```\"\"\",\n", "user,\n", ")\n", - "print(conversations)\n", "\n", - "oai.ChatCompletion.start_logging(compact=False)\n", - "assistant.receive(\"\"\"Execute temp.py\"\"\", user)\n", - "print(oai.ChatCompletion.logged_history)\n", - "oai.ChatCompletion.stop_logging()" + "assistant.receive(\"\"\"Execute temp.py\"\"\", user)" ] }, { @@ -496,7 +488,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" + "version": "3.9.15" }, "vscode": { "interpreter": { From a9aa9f9ea33a748b6cd9a6e2d57887e78f8ad047 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 16:19:26 -0400 Subject: [PATCH 09/23] improve notebook example --- notebook/autogen_agent.ipynb | 203 ++++------------------------------- 1 file changed, 20 insertions(+), 183 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 817bfb70d7..b697d4d769 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -71,7 +71,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Example Task 1: Create and execute a python script with agents\n", + "## Example Task: Create and execute a python script with agents\n", "\n", "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script." ] @@ -111,7 +111,7 @@ "```\n", "\n", "Save this code in a file named `rocket.py` and execute the script. After that, let me know the results.\n", - "NO HUMAN INPUT RECEIVED. 
TRYING TO AUTO REPLY FOR THE USR...\n", + ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -145,7 +145,7 @@ "```\n", "\n", "Save this code in the existing `rocket.py` file and execute the script again. After that, let me know the results.\n", - "NO HUMAN INPUT RECEIVED. TRYING TO AUTO REPLY FOR THE USR...\n", + ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -226,7 +226,7 @@ "```\n", "print('Hello world!')\n", "```\n", - "NO HUMAN INPUT RECEIVED. TRYING TO AUTO REPLY FOR THE USR...\n", + ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -237,40 +237,7 @@ "\n", "Great! The temp.py file has been created successfully. Now, you can run this file to see the output. If you need any further assistance, feel free to ask.\n", "\n", - "TERMINATE\n", - "{'[{\"content\": \"You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. Check the execution result returned by the user.\\\\n If the result indicates there is an error, fix the error and output the code again.\\\\n Reply \\\\\"TERMINATE\\\\\" in the end when the task is done.\\\\n \", \"role\": \"system\"}, {\"content\": \"Create and execute a script to plot a rocket without using matplotlib\", \"role\": \"user\"}, {\"content\": \"Creating a rocket involves using ASCII characters to display it visually. 
Here\\'s a simple script to get you started:\\\\n\\\\n```python\\\\n# filename: rocket.py\\\\ndef plot_rocket():\\\\n rocket = \\'\\'\\'\\\\n |\\\\n /_\\\\\\\\\\\\n /^|^\\\\\\\\ \\\\n //| \\\\\\\\\\\\\\\\\\\\n // | \\\\\\\\\\\\\\\\\\\\n \\'\\'\\'\\\\n print(rocket)\\\\n\\\\nif __name__ == \\\\\"__main__\\\\\":\\\\n plot_rocket()\\\\n```\\\\n\\\\nSave this code in a file named `rocket.py` and execute the script. After that, let me know the results.\", \"role\": \"assistant\"}, {\"content\": \"exitcode: 0 (execution succeeded)\\\\nCode output: \\\\n |\\\\n /_ /^|^\\\\\\\\ \\\\n //| \\\\\\\\\\\\n // | \\\\\\\\\\\\n \\\\n\", \"role\": \"user\"}, {\"content\": \"It seems that there\\'s an issue with the spacing in the rocket drawing. Let\\'s fix that.\\\\n\\\\n```python\\\\n# filename: rocket.py\\\\ndef plot_rocket():\\\\n rocket = \\'\\'\\'\\\\n |\\\\n /_\\\\\\\\\\\\\\\\\\\\n /^|^\\\\\\\\\\\\\\\\\\\\n //|\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n // | \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n \\'\\'\\'\\\\n print(rocket)\\\\n\\\\nif __name__ == \\\\\"__main__\\\\\":\\\\n plot_rocket()\\\\n```\\\\n\\\\nSave this code in the existing `rocket.py` file and execute the script again. After that, let me know the results.\", \"role\": \"assistant\"}, {\"content\": \"exitcode: 0 (execution succeeded)\\\\nCode output: \\\\n |\\\\n /_\\\\\\\\\\\\n /^|^\\\\\\\\\\\\n //|\\\\\\\\\\\\\\\\\\\\n // | \\\\\\\\\\\\\\\\\\\\n \\\\n\", \"role\": \"user\"}, {\"content\": \"Great! The rocket is now displayed correctly. If you have any more tasks or modifications, please let me know. Otherwise, I hope you enjoyed the ASCII rocket!\\\\n\\\\nTERMINATE\", \"role\": \"assistant\"}]': {'created_at': [0, 1, 2], 'cost': [0.01098, 0.01554, 0.01566]}, '[{\"content\": \"You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. 
Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. Check the execution result returned by the user.\\\\n If the result indicates there is an error, fix the error and output the code again.\\\\n Reply \\\\\"TERMINATE\\\\\" in the end when the task is done.\\\\n \", \"role\": \"system\"}, {\"content\": \"Create a temp.py file with the following content:\\\\n```\\\\nprint(\\'Hello world!\\')\\\\n```\", \"role\": \"user\"}, {\"content\": \"Here is the code to create the temp.py file with the specified content. Please execute this code:\\\\n\\\\n```python\\\\nwith open(\\'temp.py\\', \\'w\\') as file:\\\\n file.write(\\\\\"print(\\'Hello world!\\')\\\\\")\\\\n```\\\\n\\\\nAfter executing this code, you should have a file named temp.py with the content:\\\\n\\\\n```\\\\nprint(\\'Hello world!\\')\\\\n```\", \"role\": \"assistant\"}, {\"content\": \"exitcode: 0 (execution succeeded)\\\\nCode output: \", \"role\": \"user\"}, {\"content\": \"Great! The temp.py file has been created successfully. Now, you can run this file to see the output. If you need any further assistance, feel free to ask.\\\\n\\\\nTERMINATE\", \"role\": \"assistant\"}]': {'created_at': [3, 4], 'cost': [0.009120000000000001, 0.009869999999999999]}}\n", - "\n", - "**** coding_agent received message from user ****\n", - "\n", - "Execute temp.py\n", - "\n", - "**** user received message from coding_agent ****\n", - "\n", - "To execute temp.py, run the following code:\n", - "\n", - "```python\n", - "import os\n", - "\n", - "os.system('python temp.py')\n", - "```\n", - "\n", - "This code imports the os module and then runs the temp.py file. After executing this code, you should see the output:\n", - "\n", - "Hello world!\n", - "NO HUMAN INPUT RECEIVED. 
TRYING TO AUTO REPLY FOR THE USR...\n", - "\n", - "**** coding_agent received message from user ****\n", - "\n", - "exitcode: 0 (execution succeeded)\n", - "Code output: Hello world!\n", - "\n", - "\n", - "**** user received message from coding_agent ****\n", - "\n", - "I'm glad that the code execution was successful and you got the desired output! If you need any further help or assistance with another task, feel free to ask.\n", - "\n", - "TERMINATE\n", - "{0: {'request': {'messages': [{'content': 'You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. Check the execution result returned by the user.\\n If the result indicates there is an error, fix the error and output the code again.\\n Reply \"TERMINATE\" in the end when the task is done.\\n ', 'role': 'system'}, {'content': \"Create a temp.py file with the following content:\\n```\\nprint('Hello world!')\\n```\", 'role': 'user'}, {'content': 'Here is the code to create the temp.py file with the specified content. Please execute this code:\\n\\n```python\\nwith open(\\'temp.py\\', \\'w\\') as file:\\n file.write(\"print(\\'Hello world!\\')\")\\n```\\n\\nAfter executing this code, you should have a file named temp.py with the content:\\n\\n```\\nprint(\\'Hello world!\\')\\n```', 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: ', 'role': 'user'}, {'content': 'Great! The temp.py file has been created successfully. Now, you can run this file to see the output. 
If you need any further assistance, feel free to ask.\\n\\nTERMINATE', 'role': 'assistant'}, {'content': 'Execute temp.py', 'role': 'user'}, {'content': \"To execute temp.py, run the following code:\\n\\n```python\\nimport os\\n\\nos.system('python temp.py')\\n```\\n\\nThis code imports the os module and then runs the temp.py file. After executing this code, you should see the output:\\n\\nHello world!\", 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: Hello world!\\n', 'role': 'user'}, {'content': \"I'm glad that the code execution was successful and you got the desired output! If you need any further help or assistance with another task, feel free to ask.\\n\\nTERMINATE\", 'role': 'assistant'}], 'model': 'gpt-4', 'request_timeout': 600}, 'response': {'id': 'chatcmpl-7K97cptjXXXV0McBqOUBchGszCHMd', 'object': 'chat.completion', 'created': 1685035980, 'model': 'gpt-4-0314', 'usage': {'prompt_tokens': 302, 'completion_tokens': 53, 'total_tokens': 355}, 'choices': [{'message': {'role': 'assistant', 'content': \"To execute temp.py, run the following code:\\n\\n```python\\nimport os\\n\\nos.system('python temp.py')\\n```\\n\\nThis code imports the os module and then runs the temp.py file. After executing this code, you should see the output:\\n\\nHello world!\"}, 'finish_reason': 'stop', 'index': 0}], 'cost': 0.012240000000000001}}, 1: {'request': {'messages': [{'content': 'You suggest python code (in a python coding block) for a user to execute for a given task. If you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Finish the task smartly. Don\\'t suggest shell command. Don\\'t include multiple code blocks in one response. Use \\'print\\' function for the output when relevant. 
Check the execution result returned by the user.\\n If the result indicates there is an error, fix the error and output the code again.\\n Reply \"TERMINATE\" in the end when the task is done.\\n ', 'role': 'system'}, {'content': \"Create a temp.py file with the following content:\\n```\\nprint('Hello world!')\\n```\", 'role': 'user'}, {'content': 'Here is the code to create the temp.py file with the specified content. Please execute this code:\\n\\n```python\\nwith open(\\'temp.py\\', \\'w\\') as file:\\n file.write(\"print(\\'Hello world!\\')\")\\n```\\n\\nAfter executing this code, you should have a file named temp.py with the content:\\n\\n```\\nprint(\\'Hello world!\\')\\n```', 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: ', 'role': 'user'}, {'content': 'Great! The temp.py file has been created successfully. Now, you can run this file to see the output. If you need any further assistance, feel free to ask.\\n\\nTERMINATE', 'role': 'assistant'}, {'content': 'Execute temp.py', 'role': 'user'}, {'content': \"To execute temp.py, run the following code:\\n\\n```python\\nimport os\\n\\nos.system('python temp.py')\\n```\\n\\nThis code imports the os module and then runs the temp.py file. After executing this code, you should see the output:\\n\\nHello world!\", 'role': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: Hello world!\\n', 'role': 'user'}, {'content': \"I'm glad that the code execution was successful and you got the desired output! 
If you need any further help or assistance with another task, feel free to ask.\\n\\nTERMINATE\", 'role': 'assistant'}], 'model': 'gpt-4', 'request_timeout': 600}, 'response': {'id': 'chatcmpl-7K97qGxiz1iwj1jBawK0ZkreFlJh4', 'object': 'chat.completion', 'created': 1685035994, 'model': 'gpt-4-0314', 'usage': {'prompt_tokens': 378, 'completion_tokens': 36, 'total_tokens': 414}, 'choices': [{'message': {'role': 'assistant', 'content': \"I'm glad that the code execution was successful and you got the desired output! If you need any further help or assistance with another task, feel free to ask.\\n\\nTERMINATE\"}, 'finish_reason': 'stop', 'index': 0}], 'cost': 0.0135}}}\n" + "TERMINATE\n" ] } ], @@ -283,9 +250,7 @@ "print('Hello world!')\n", "```\"\"\",\n", "user,\n", - ")\n", - "\n", - "assistant.receive(\"\"\"Execute temp.py\"\"\", user)" + ")" ] }, { @@ -293,14 +258,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Example Task 2: Math problem solving\n", - "\n", - "Next, let's use the agents to solve a math problem. In the examples shown above on coding tasks, the real user is not providing any inputs when prompted for feedback. Thus UserProxyAgent provides auto replies to the PythonAgent. In this example, we demonstrate an example where the real user provides feedback to the PythonAgent via the UserProxyAgent." + "Now that the temp.py has been created. Let's prompt the assistant to execute the code in the file. 
" ] }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 4, "metadata": {}, "outputs": [ { @@ -310,165 +273,39 @@ "\n", "**** coding_agent received message from user ****\n", "\n", - "\n", - " Find $a + b + c$, given that $x+y \\neq -1$ and \n", - " \\begin{align}\n", - "\tax + by + c & = x + 7,\\\n", - "\ta + bx + cy & = 2x + 6y,\\\n", - "\tay + b + cx & = 4x + y.\n", - "\t\\end{align}.\n", - " \n", + "Execute temp.py\n", "\n", "**** user received message from coding_agent ****\n", "\n", - "We can solve this system of linear equations using Python. Let me write the code for you:\n", + "To execute temp.py, run the following code:\n", "\n", "```python\n", - "import numpy as np\n", - "\n", - "A = np.array([[1, 1, 1], [2, 6, 0], [4, 0, 1]])\n", - "B = np.array([1 + 7, 2 * 1 + 6 * 1, 4 * 1 + 1])\n", - "\n", - "result = np.linalg.solve(A, B)\n", - "sum_result = np.sum(result)\n", + "import os\n", "\n", - "print(\"a + b + c =\", sum_result)\n", + "os.system('python temp.py')\n", "```\n", "\n", - "You can execute this code, and it will return the value of $a + b + c$.\n", - "\n", - "**** coding_agent received message from user ****\n", - "\n", - "Your solution is too complicated and expensive. Try to solve the problem smartly. For example instead of writing python code to solve it, you can first observe the equation system to see if there are nice properties that could be used to solve the problem. \n", - "\n", - "**** user received message from coding_agent ****\n", - "\n", - "You're right; we can simplify the problem by analyzing the given system of equations. Observe that:\n", - "\n", - "1. From the first equation, we have $c = (x + 7) - (ax + by)$\n", - "2. 
From the second equation, we have $(2x + 6y) = a + bx + cy$\n", - "\n", - "We can substitute $c$ from the first equation into the second equation:\n", - "\n", - "$(2x + 6y) = a + bx + (x + 7) - (ax + by)$\n", - "\n", - "Now, we can simplify:\n", - "\n", - "$a - ax = 7$\n", - "\n", - "Let's solve for $a$ from this equation:\n", - "$a(1 - x) = 7$\n", - "$a = 7 / (1 - x)$\n", - "\n", - "Now, we have the value of $a$ in terms of $x$. We can use this result to substitute back into the second equation:\n", - "\n", - "$2x + 6y = 7 / (1 - x) + bx + (x + 7) - (7x / (1 - x) + by)$\n", - "\n", - "Now let's deal with the third equation:\n", - "\n", - "3. From the third equation, we have $b = (4x + y) - (ay + cx)$\n", - "\n", - "Substitute the value of $a$ and $c$ into the third equation:\n", - "\n", - "$b = (4x + y) - (7y / (1 - x) + (x + 7) - (ax + by))$\n", - "\n", - "We can use these simplified equations to find the sum of $a + b + c$ without calculating the individual value of $a$, $b$, and $c$.\n", - "\n", - "```python\n", - "import sympy as sp\n", - "\n", - "x, y = sp.symbols('x y')\n", - "a = 7 / (1 - x)\n", - "c = (x + 7) - (a * x)\n", - "b = (4 * x + y) - (a * y + c)\n", - "\n", - "sum_result = a + b + c\n", - "sum_result = sp.simplify(sum_result)\n", - "\n", - "print(\"a + b + c =\", sum_result)\n", - "```\n", + "This code imports the os module and then runs the temp.py file. After executing this code, you should see the output:\n", "\n", - "Execute this code, and it will give you a simplified expression for $a + b + c$ in terms of $x$ and $y$. Note that this will still give an answer in terms of $x$ and $y$, but will not require solving the full system of equations.\n", + "Hello world!\n", + ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", - "Still too complicated. Try again. Solve it smartly. 
\n", - "\n", - "**** user received message from coding_agent ****\n", - "\n", - "I apologize for the confusion earlier. Let's try another approach by analyzing the problem.\n", - "\n", - "Add the first and the third equation:\n", - "\n", - "$(ax + by + c) + (ay + b + cx) = (x + 7) + (4x + y)$\n", - "\n", - "Now, by collecting the terms:\n", - "\n", - "$(a+c)x + (b+a)y + (c+b) = 5x + y + 7$\n", - "\n", - "Left-hand side of the equation is symmetrical with respect to a, b, c.\n", - "\n", - "Thus, without even calculating a, b, and c we can find their sum:\n", - "\n", - "$a + b + c = 5$\n", - "\n", - "So the answer is 5. I hope this solution is more satisfactory. Please let me know if you have any questions or if you'd like me to help with any other tasks. \n", - "\n", - "TERMINATE\n", - "\n", - "**** coding_agent received message from user ****\n", + "exitcode: 0 (execution succeeded)\n", + "Code output: Hello world!\n", "\n", - "You are almost right about the symmetric property of the equation system is symmetrical. Use this property. Also, use python if needed to aoivd arithmetic errors when manipulating the equation systems.\n", "\n", "**** user received message from coding_agent ****\n", "\n", - "I apologize for the error in my previous response. Let's exploit the symmetric property of the system of equations correctly.\n", - "\n", - "Add all three equations:\n", - "\n", - "$(ax + by + c) + (a + bx + cy) + (ay + b + cx) = (x + 7) + (2x + 6y) + (4x + y)$\n", - "\n", - "Now, by collecting the terms:\n", - "\n", - "$(a+b+c)(x+y) = 7x + 7y + 7$\n", - "\n", - "Since we know that $x + y \\neq -1$, we can safely divide both sides of the equation by $x+y$:\n", - "\n", - "$a+b+c = 7$\n", - "\n", - "So the answer is 7. We didn't need to use Python to solve this particular problem, as the symmetric property allowed us to solve it directly.\n", + "I'm glad that the code execution was successful and you got the desired output! 
If you need any further help or assistance with another task, feel free to ask.\n", "\n", "TERMINATE\n" ] } ], "source": [ - "math_problem_to_solve = \"\"\"\n", - " Find $a + b + c$, given that $x+y \\\\neq -1$ and \n", - " \\\\begin{align}\n", - "\tax + by + c & = x + 7,\\\\\n", - "\ta + bx + cy & = 2x + 6y,\\\\\n", - "\tay + b + cx & = 4x + y.\n", - "\t\\\\end{align}.\n", - " \"\"\"\n", - "# the purpose of the following two lines is to log the conversation history\n", - "conversations = {}\n", - "oai.ChatCompletion.start_logging(conversations)\n", - "\n", - "# create an assistant which is essentially a PythonAgent instance named \"coding_agent\"\n", - "assistant = PythonAgent(\"coding_agent\", request_timeout=600, seed=42, config_list=config_list)\n", - "# create a UserProxyAgent instance named \"user\"\n", - "user = UserProxyAgent(\n", - " \"user\",\n", - " human_input_mode=\"ALWAYS\",\n", - " max_consecutive_auto_reply=10,\n", - " is_termination_msg=lambda x: x.rstrip().endswith(\"TERMINATE\"),\n", - ")\n", - "# the assistant receives a message from the user, which contains the task description\n", - "assistant.receive(\n", - " math_problem_to_solve,\n", - " user,\n", - ")" + "assistant.receive(\"\"\"Execute temp.py\"\"\", user)" ] } ], @@ -488,7 +325,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.15" + "version": "3.9.16" }, "vscode": { "interpreter": { From cf364700a8ce62c52877399d7e7e5592bc3f7c93 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 17:01:17 -0400 Subject: [PATCH 10/23] improve doc --- notebook/autogen_agent.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index b697d4d769..22da20fd75 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -192,7 +192,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Next, let's see how to use the agents to first write the 
generated script to a file and then execute the script in two sessions of conversation between the `PythonAgent` and the `UserProxyAgent`. In this example, we also log the conversation between the two agents using the logging functionality in FLAML. " + "Next, let's see how to use the agents to first write the generated script to a file and then execute the script in two sessions of conversation between the `PythonAgent` and the `UserProxyAgent`." ] }, { @@ -258,7 +258,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now that the temp.py has been created. Let's prompt the assistant to execute the code in the file. " + "If the code block above works properly, a file named `temp.py` will be created and saved in the default working directory `flaml/autogen/extensions` (You could also specify a particular working directory to use by setting the `work_dir` argument in the `receive()` function.) Let's now prompt the assistant to execute the code in the file with the following line of code. " ] }, { From a400c53eb933b382a5e2d240bb493e2d37e2a396 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 22:56:04 -0400 Subject: [PATCH 11/23] improve notebook doc --- notebook/autogen_agent.ipynb | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 22da20fd75..6414af3d8c 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -111,7 +111,8 @@ "```\n", "\n", "Save this code in a file named `rocket.py` and execute the script. After that, let me know the results.\n", - ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", + "\n", + " >>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -145,7 +146,8 @@ "```\n", "\n", "Save this code in the existing `rocket.py` file and execute the script again. 
After that, let me know the results.\n", - ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", + "\n", + " >>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -226,7 +228,8 @@ "```\n", "print('Hello world!')\n", "```\n", - ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", + "\n", + " >>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", @@ -258,7 +261,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "If the code block above works properly, a file named `temp.py` will be created and saved in the default working directory `flaml/autogen/extensions` (You could also specify a particular working directory to use by setting the `work_dir` argument in the `receive()` function.) Let's now prompt the assistant to execute the code in the file with the following line of code. " + "The example above involves code execution. In FLAML, code execution is triggered automatically by the `UserProxyAgent` when it detects an executable code block in a received message and no human user input is provided. This process occurs in a designated working directory, using a Docker container by default. Unless a specific directory is specified, FLAML defaults to the `flaml/autogen/extensions` directory. Users have the option to specify a different working directory by setting the `work_dir` argument when constructing a new instance of the `UserProxyAgent`.\n", + "\n", + "Upon successful execution of the preceding code block, a file named `temp.py` will be created and saved in the default working directory `flaml/autogen/extensions`. Now, let's prompt the assistant to execute the code contained within this file using the following line of code." ] }, { @@ -288,7 +293,8 @@ "This code imports the os module and then runs the temp.py file. 
After executing this code, you should see the output:\n", "\n", "Hello world!\n", - ">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", + "\n", + " >>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n", "\n", "**** coding_agent received message from user ****\n", "\n", From e6e3fc43c0a32034ae96b97dcce4ce45344655c7 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 22:57:13 -0400 Subject: [PATCH 12/23] improve print --- flaml/autogen/agent/user_proxy_agent.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/flaml/autogen/agent/user_proxy_agent.py b/flaml/autogen/agent/user_proxy_agent.py index 8472ef3b82..3e651c29fc 100644 --- a/flaml/autogen/agent/user_proxy_agent.py +++ b/flaml/autogen/agent/user_proxy_agent.py @@ -119,5 +119,5 @@ def receive(self, message, sender): return self._consecutive_auto_reply_counter[sender.name] += 1 - print(">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...", flush=True) + print("\n >>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...", flush=True) self.auto_reply(message, sender, default_reply=reply) From ff462e490343cee0d399f10a5e22d986473e4166 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 23:08:08 -0400 Subject: [PATCH 13/23] doc --- website/docs/Use-Cases/Auto-Generation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/Use-Cases/Auto-Generation.md b/website/docs/Use-Cases/Auto-Generation.md index 03b2c3d382..91fc11d9ef 100644 --- a/website/docs/Use-Cases/Auto-Generation.md +++ b/website/docs/Use-Cases/Auto-Generation.md @@ -380,7 +380,7 @@ The compact history is more efficient and the individual API call history contai [`flaml.autogen.agents`](../reference/autogen/agent/agent) contains an experimental implementation of interactive agents which can adapt to human or simulated feedback. This subpackage is under active development. 
-Find a notebook example [here](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent.ipynb). +Find a notebook example on how to [use agents in FLAML to solve coding tasks](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent.ipynb). ## Utilities for Applications From 5dbee0cc4db7fd796432b99a8bec6cd4f29f7d59 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 23:25:30 -0400 Subject: [PATCH 14/23] human_input_mode --- notebook/autogen_agent.ipynb | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 6414af3d8c..a51b5d026f 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -73,12 +73,12 @@ "source": [ "## Example Task: Create and execute a python script with agents\n", "\n", - "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script." + "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script. This process involves constructing a `PythonAgent` to serve as the assistant, along with a `UserProxyAgent` that acts as a proxy for the human user. In this example demonstrated below, when constructing the `UserProxyAgent`, we select the `human_input_mode` to \"Never\". This means that the `UserProxyAgent` will not solicit feedback from the human user until the limit defined by `max_consecutive_auto_reply` is reached. For the purpose of this example, we've set this limit to 10." 
] }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 6, "metadata": {}, "outputs": [ { @@ -178,7 +178,7 @@ "# create a UserProxyAgent instance named \"user\"\n", "user = UserProxyAgent(\n", " \"user\",\n", - " human_input_mode=\"ALWAYS\",\n", + " human_input_mode=\"NEVER\",\n", " max_consecutive_auto_reply=10,\n", " is_termination_msg=lambda x: x.rstrip().endswith(\"TERMINATE\"),\n", ")\n", @@ -199,7 +199,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 7, "metadata": {}, "outputs": [ { @@ -268,7 +268,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 8, "metadata": {}, "outputs": [ { From 2da26bb7ad369c686d02b94d94aff7fee7e7ae27 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 23:26:46 -0400 Subject: [PATCH 15/23] human_input_mode str --- notebook/autogen_agent.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index a51b5d026f..d0ade361e3 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -73,7 +73,7 @@ "source": [ "## Example Task: Create and execute a python script with agents\n", "\n", - "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script. This process involves constructing a `PythonAgent` to serve as the assistant, along with a `UserProxyAgent` that acts as a proxy for the human user. In this example demonstrated below, when constructing the `UserProxyAgent`, we select the `human_input_mode` to \"Never\". This means that the `UserProxyAgent` will not solicit feedback from the human user until the limit defined by `max_consecutive_auto_reply` is reached. For the purpose of this example, we've set this limit to 10." + "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script. 
This process involves constructing a `PythonAgent` to serve as the assistant, along with a `UserProxyAgent` that acts as a proxy for the human user. In this example demonstrated below, when constructing the `UserProxyAgent`, we select the `human_input_mode` to \"Never\". This means that the `UserProxyAgent` will not solicit feedback from the human user until the limit defined by `max_consecutive_auto_reply` is reached. For the purpose of this example, we've set this limit to 10." +    "In the example below, let's see how to use the agents in FLAML to write a python script and execute the script. This process involves constructing a `PythonAgent` to serve as the assistant, along with a `UserProxyAgent` that acts as a proxy for the human user. In this example demonstrated below, when constructing the `UserProxyAgent`, we select the `human_input_mode` to \"NEVER\". This means that the `UserProxyAgent` will not solicit feedback from the human user until the limit defined by `max_consecutive_auto_reply` is reached. For the purpose of this example, we've set this limit to 10." ] }, From d28892c47f1b128b86cadb19aad31426943ec9bc Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Fri, 26 May 2023 23:44:27 -0400 Subject: [PATCH 16/23] indent --- notebook/autogen_agent.ipynb | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index d0ade361e3..af7ee58b8e 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -248,11 +248,11 @@ "# it is suggested to reset the assistant to clear the state if the new task is not related to the previous one.\n", "assistant.reset()\n", "assistant.receive(\n", - "\"\"\"Create a temp.py file with the following content:\n", - "```\n", - "print('Hello world!')\n", - "```\"\"\",\n", - "user,\n", + "\t\"\"\"Create a temp.py file with the following content:\n", + "\t```\n", + "\tprint('Hello world!')\n", + "\t```\"\"\",\n", + "\tuser,\n", ")" ] }, From f9c1770e02fcd8e41e9b9d49270cd98f525a4a63 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Sat, 27 May 2023 00:13:09 -0400 Subject: [PATCH 17/23] indent --- notebook/autogen_agent.ipynb | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index af7ee58b8e..b0b4554286 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -248,11 +248,11 @@ "# it is suggested to reset the assistant to clear the state if the new task is not related to the previous one.\n", "assistant.reset()\n",
"assistant.receive(\n", - "\t\"\"\"Create a temp.py file with the following content:\n", - "\t```\n", - "\tprint('Hello world!')\n", - "\t```\"\"\",\n", - "\tuser,\n", + " \"\"\"Create a temp.py file with the following content:\n", + " ```\n", + " print('Hello world!')\n", + " ```\"\"\",\n", + " user,\n", ")" ] }, From 3422bdc94cbe1d81724b91e69d2ce84ec6921db0 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Sat, 27 May 2023 08:41:41 -0400 Subject: [PATCH 18/23] Update flaml/autogen/agent/user_proxy_agent.py Co-authored-by: Chi Wang --- flaml/autogen/agent/user_proxy_agent.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/flaml/autogen/agent/user_proxy_agent.py b/flaml/autogen/agent/user_proxy_agent.py index 3bf209c4c8..f1975dcd34 100644 --- a/flaml/autogen/agent/user_proxy_agent.py +++ b/flaml/autogen/agent/user_proxy_agent.py @@ -119,5 +119,5 @@ def receive(self, message, sender): return self._consecutive_auto_reply_counter[sender.name] += 1 - print("\n >>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...", flush=True) + print("\n>>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...", flush=True) self.auto_reply(message, sender, default_reply=reply) From 9ab7379706129ae31352293410001b1a22e328cc Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Sat, 27 May 2023 14:22:58 -0400 Subject: [PATCH 19/23] Update notebook/autogen_agent.ipynb Co-authored-by: Chi Wang --- notebook/autogen_agent.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index b0b4554286..de49b9cd70 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -17,7 +17,7 @@ "\n", "## Requirements\n", "\n", - "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the [openai] option:\n", + "FLAML requires `Python>=3.7`. 
To run this notebook example, please install flaml with the [openai] option:\n", +    "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the [autogen] option:\n", "```bash\n", "pip install flaml[openai]==1.2.4\n", "```" ] }, From 04b2cfbd3c824d63eff8fc136275d73c1688e4f0 Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Sat, 27 May 2023 14:23:10 -0400 Subject: [PATCH 20/23] Update notebook/autogen_agent.ipynb Co-authored-by: Chi Wang --- notebook/autogen_agent.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index de49b9cd70..93e1158174 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -36,7 +36,7 @@ }, "outputs": [], "source": [ -    "# %pip install flaml[openai]==1.2.4" +    "# %pip install flaml[autogen]" ] }, From d9aebc67f2acb30a8b1d8c216e09fc61265a583e Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Sat, 27 May 2023 14:23:18 -0400 Subject: [PATCH 21/23] Update notebook/autogen_agent.ipynb Co-authored-by: Chi Wang --- notebook/autogen_agent.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent.ipynb index 93e1158174..52fd85bfc6 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent.ipynb @@ -19,7 +19,7 @@ "\n", "FLAML requires `Python>=3.7`. 
To run this notebook example, please install flaml with the [autogen] option:\n", "```bash\n", - "pip install flaml[openai]==1.2.4\n", + "pip install flaml[autogen]\n", "```" ] }, From 6a6be61f503d8f1f7b82bbc65d854bc9d288a2fe Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Sat, 27 May 2023 15:46:02 -0400 Subject: [PATCH 22/23] renaming and doc format --- ...b => autogen_agent_auto_feedback_from_code_execution.ipynb} | 2 +- website/docs/Use-Cases/Auto-Generation.md | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) rename notebook/{autogen_agent.ipynb => autogen_agent_auto_feedback_from_code_execution.ipynb} (96%) diff --git a/notebook/autogen_agent.ipynb b/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb similarity index 96% rename from notebook/autogen_agent.ipynb rename to notebook/autogen_agent_auto_feedback_from_code_execution.ipynb index 52fd85bfc6..93c0ae30dd 100644 --- a/notebook/autogen_agent.ipynb +++ b/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb @@ -13,7 +13,7 @@ "\n", "FLAML offers an experimental feature of interactive LLM agents, which can be used to solve various tasks, including coding and math problem-solving.\n", "\n", - "In this notebook, we demonstrate how to use `PythonAgent` and `UserProxyAgent` to write code and execute the code. Here `PythonAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `PythonAgent`. By setting `user_interaction_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `PythonAgent`. For example, when `user_interaction_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `PythonAgent` without doing any additional steps. 
When no user feedback is provided, the `UserProxyAgent` will execute the code written by `PythonAgent` directly and returns the execution results (success or failure and corresponding outputs) to `PythonAgent`.\n", + "In this notebook, we demonstrate how to use `PythonAgent` and `UserProxyAgent` to write code and execute the code. Here `PythonAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for the human user to execute the code written by `PythonAgent`, or automatically execute the code. Depending on the setting of `user_interaction_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or uses auto-feedback based on the result of code execution. For example, when `user_interaction_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `PythonAgent` without doing any additional steps. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `PythonAgent` directly and returns the execution results (success or failure and corresponding outputs) to `PythonAgent`.\n", "\n", "## Requirements\n", "\n", diff --git a/website/docs/Use-Cases/Auto-Generation.md b/website/docs/Use-Cases/Auto-Generation.md index 91fc11d9ef..57f191b238 100644 --- a/website/docs/Use-Cases/Auto-Generation.md +++ b/website/docs/Use-Cases/Auto-Generation.md @@ -380,7 +380,8 @@ The compact history is more efficient and the individual API call history contai [`flaml.autogen.agents`](../reference/autogen/agent/agent) contains an experimental implementation of interactive agents which can adapt to human or simulated feedback. This subpackage is under active development. 
-Find a notebook example on how to [use agents in FLAML to solve coding tasks](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent.ipynb). +*Interested in trying it yourself? Please check the following notebook example:* +* [Use agents in FLAML to perform tasks with code](https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb) ## Utilities for Applications From fadb9cec782d2bb8ea2ae20b06487bf0bee9090a Mon Sep 17 00:00:00 2001 From: Qingyun Wu Date: Sat, 27 May 2023 15:47:27 -0400 Subject: [PATCH 23/23] typo --- notebook/autogen_agent_auto_feedback_from_code_execution.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb b/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb index 93c0ae30dd..ad817c0f2c 100644 --- a/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb +++ b/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb @@ -13,7 +13,7 @@ "\n", "FLAML offers an experimental feature of interactive LLM agents, which can be used to solve various tasks, including coding and math problem-solving.\n", "\n", - "In this notebook, we demonstrate how to use `PythonAgent` and `UserProxyAgent` to write code and execute the code. Here `PythonAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for the human user to execute the code written by `PythonAgent`, or automatically execute the code. Depending on the setting of `user_interaction_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or uses auto-feedback based on the result of code execution. For example, when `user_interaction_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. 
When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `PythonAgent` without doing any additional steps. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `PythonAgent` directly and returns the execution results (success or failure and corresponding outputs) to `PythonAgent`.\n", +    "In this notebook, we demonstrate how to use `PythonAgent` and `UserProxyAgent` to write code and execute the code. Here `PythonAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for the human user to execute the code written by `PythonAgent`, or automatically execute the code. Depending on the setting of `human_input_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or uses auto-feedback based on the result of code execution. For example, when `human_input_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `PythonAgent` without doing any additional steps. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `PythonAgent` directly and return the execution results (success or failure and corresponding outputs) to `PythonAgent`.\n", "\n", "## Requirements\n", "\n",