diff --git a/.devcontainer/README.md b/.devcontainer/README.md index 8ae045f27d1..c56d9d56cc0 100644 --- a/.devcontainer/README.md +++ b/.devcontainer/README.md @@ -49,7 +49,7 @@ Feel free to modify these Dockerfiles for your specific project needs. Here are - **Setting Environment Variables**: Add environment variables using the `ENV` command for any application-specific configuration. We have prestaged the line needed to inject your OpenAI API key into the Docker environment as an environment variable; others can be staged the same way. Just uncomment the line, changing `# ENV OPENAI_API_KEY="{OpenAI-API-Key}"` to `ENV OPENAI_API_KEY="{OpenAI-API-Key}"`. - **Need a less "Advanced" AutoGen build**: If the `./full/Dockerfile` is too much but you need more than the base build, update this line in the Dockerfile. -`RUN pip install pyautogen[teachable,lmm,retrievechat,mathchat,blendsearch] autogenra` to install just what you need, for example `RUN pip install pyautogen[retrievechat,blendsearch] autogenra` +`RUN pip install autogen-agentchat[teachable,lmm,retrievechat,mathchat,blendsearch]~=0.2 autogenra` to install just what you need, for example `RUN pip install autogen-agentchat[retrievechat,blendsearch]~=0.2 autogenra` - **Can't Dev without your favorite CLI tool**: If you need particular OS tools installed in your Docker container, you can add those packages right after the `sudo` setup in the `./base/Dockerfile` and `./full/Dockerfile` files. In the example below we install net-tools and vim into the environment.
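A minimal sketch of the kind of Dockerfile addition described above, assuming a Debian-based dev container image where `apt-get` is available (the exact placement relative to the Dockerfile's existing `sudo`/user setup is an assumption):

```dockerfile
# Example: add OS tooling (net-tools, vim) to the dev container image
RUN sudo apt-get update \
    && sudo apt-get install -y --no-install-recommends net-tools vim \
    && sudo rm -rf /var/lib/apt/lists/*
```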
diff --git a/.devcontainer/full/Dockerfile b/.devcontainer/full/Dockerfile index 0787ad24027..525dd3978d4 100644 --- a/.devcontainer/full/Dockerfile +++ b/.devcontainer/full/Dockerfile @@ -22,7 +22,7 @@ WORKDIR /home/autogen # Install Python packages RUN pip install --upgrade pip -RUN pip install pyautogen[teachable,lmm,retrievechat,mathchat,blendsearch] autogenra +RUN pip install autogen-agentchat[teachable,lmm,retrievechat,mathchat,blendsearch]~=0.2 autogenra RUN pip install numpy pandas matplotlib seaborn scikit-learn requests urllib3 nltk pillow pytest beautifulsoup4 # Expose port diff --git a/.github/workflows/python-package.yml b/.github/workflows/python-package.yml index f2967c13f5f..94edf117de2 100644 --- a/.github/workflows/python-package.yml +++ b/.github/workflows/python-package.yml @@ -49,8 +49,5 @@ jobs: pip install twine python setup.py sdist bdist_wheel - name: Publish to PyPI - env: - TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }} - TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }} shell: pwsh run: twine upload dist/* diff --git a/OAI_CONFIG_LIST_sample b/OAI_CONFIG_LIST_sample index c1711acd7c6..7cb370fd515 100644 --- a/OAI_CONFIG_LIST_sample +++ b/OAI_CONFIG_LIST_sample @@ -1,5 +1,4 @@ // Please modify the content, remove these four lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code. -// If using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 14 and line 21 below). Use "pip list" to check version of pyautogen installed. // // NOTE: This configuration lists GPT-4 as the default model, as this represents our current recommendation, and is known to work well with AutoGen. If you use a model other than GPT-4, you may need to revise various system prompts (especially if using weaker models like GPT-3.5-turbo). Moreover, if you use models other than those hosted by OpenAI or Azure, you may incur additional risks related to alignment and safety. 
Proceed with caution if updating this default. diff --git a/README.md b/README.md index 8595bb60506..bc6242ba599 100644 --- a/README.md +++ b/README.md @@ -5,12 +5,9 @@ AutoGen Logo -![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue) [![PyPI version](https://img.shields.io/badge/PyPI-v0.2.34-blue.svg)](https://pypi.org/project/pyautogen/) +![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue) [![PyPI - Version](https://img.shields.io/pypi/v/autogen-agentchat)](https://pypi.org/project/autogen-agentchat/) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) -[![Downloads](https://static.pepy.tech/badge/pyautogen/week)](https://pepy.tech/project/pyautogen) -[![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://aka.ms/autogen-dc) - [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen) @@ -20,6 +17,10 @@ AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AutoGen aims to streamline the development and research of agentic AI, much like PyTorch does for Deep Learning. It offers features such as agents that can interact with each other, support for various large language models (LLMs) and tool use, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns. > [!IMPORTANT] +> To better align with the new multi-package structure coming very soon, AutoGen is now available on PyPI as [`autogen-agentchat`](https://pypi.org/project/autogen-agentchat/) as of version `0.2.36`. This is the official package for the AutoGen project. 
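Every install command in this change pins the renamed package with `~=0.2`, PEP 440's compatible-release operator: it accepts `0.2` and any later `0.x` release while excluding `1.0`, so `0.2`-era code keeps resolving to a compatible version. A rough pure-Python illustration of that rule (a simplified sketch, not pip's actual resolver; it ignores pre-releases, epochs, and other PEP 440 details):

```python
def compatible_release_ok(version: str, spec: str = "0.2") -> bool:
    """Rough sketch of PEP 440's ``~=`` operator for a dotted spec.

    ``~=0.2`` behaves like ``>=0.2, ==0.*``: every component before the
    last must match exactly, and the last component may grow.
    """
    v = [int(p) for p in version.split(".")]
    s = [int(p) for p in spec.split(".")]
    # All but the last spec component must match exactly.
    if v[: len(s) - 1] != s[:-1]:
        return False
    # The last spec component may be equal or greater.
    return v[len(s) - 1] >= s[-1]

print(compatible_release_ok("0.2.36"))  # True: the release this diff publishes
print(compatible_release_ok("0.3.0"))   # True: later 0.x releases still match
print(compatible_release_ok("1.0.0"))   # False: ~=0.2 excludes 1.0
```

This is why the notebooks can pin `autogen-agentchat~=0.2` once and keep working across subsequent `0.2.x` bugfix releases.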
+ + +> [!NOTE] > *Note for contributors and users*: [microsoft/autogen](https://aka.ms/autogen-gh) is the official repository of the AutoGen project and is under active development and maintenance under the MIT license. We welcome contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. We acknowledge the invaluable contributions from our existing contributors, as listed in [contributors.md](./CONTRIBUTORS.md). Whether you are an individual contributor or represent an organization, we invite you to join us in shaping the future of this project. For further information please also see [Microsoft open-source contributing guidelines](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing). > > -_Maintainers (Sept 6th, 2024)_ @@ -135,14 +136,14 @@ Find detailed instructions for users [here](https://microsoft.github.io/autogen/ AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed via pip: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` Minimal dependencies are installed without extra options. You can install extra options based on the features you need. Find more options in [Installation](https://microsoft.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment). diff --git a/autogen/agentchat/contrib/capabilities/text_compressors.py b/autogen/agentchat/contrib/capabilities/text_compressors.py index 78554bdc935..fd203c35fca 100644 --- a/autogen/agentchat/contrib/capabilities/text_compressors.py +++ b/autogen/agentchat/contrib/capabilities/text_compressors.py @@ -5,7 +5,7 @@ import llmlingua except ImportError: IMPORT_ERROR = ImportError( - "LLMLingua is not installed. Please install it with `pip install pyautogen[long-context]`" + "LLMLingua is not installed. 
Please install it with `pip install autogen-agentchat[long-context]~=0.2`" ) PromptCompressor = object else: diff --git a/autogen/agentchat/contrib/retrieve_user_proxy_agent.py b/autogen/agentchat/contrib/retrieve_user_proxy_agent.py index b247d7a158f..ee8f74bb9a6 100644 --- a/autogen/agentchat/contrib/retrieve_user_proxy_agent.py +++ b/autogen/agentchat/contrib/retrieve_user_proxy_agent.py @@ -9,7 +9,9 @@ try: import chromadb except ImportError as e: - raise ImportError(f"{e}. You can try `pip install pyautogen[retrievechat]`, or install `chromadb` manually.") + raise ImportError( + f"{e}. You can try `pip install autogen-agentchat[retrievechat]~=0.2`, or install `chromadb` manually." + ) from autogen.agentchat import UserProxyAgent from autogen.agentchat.agent import Agent from autogen.agentchat.contrib.vectordb.base import Document, QueryResults, VectorDB, VectorDBFactory diff --git a/autogen/version.py b/autogen/version.py index 9b1b78b4b3a..c971add6528 100644 --- a/autogen/version.py +++ b/autogen/version.py @@ -1 +1 @@ -__version__ = "0.2.35" +__version__ = "0.2.36" diff --git a/notebook/Async_human_input.ipynb b/notebook/Async_human_input.ipynb index 07459b4a86b..5d4926bf13c 100644 --- a/notebook/Async_human_input.ipynb +++ b/notebook/Async_human_input.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" @@ -10,179 +10,9 @@ "id": "tLIs1YRdr8jM", "outputId": "909c1c70-1a22-4e9d-b7f4-a40e2d737fb0" }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: pyautogen>=0.2.3 in /home/vscode/.local/lib/python3.10/site-packages (0.2.3)\n", - "Requirement already satisfied: openai>=1.3 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.6.1)\n", - "Requirement already satisfied: 
diskcache in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (5.6.3)\n", - "Requirement already satisfied: termcolor in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.4.0)\n", - "Requirement already satisfied: flaml in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.1.1)\n", - "Requirement already satisfied: python-dotenv in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.0.0)\n", - "Requirement already satisfied: tiktoken in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (0.5.2)\n", - "Requirement already satisfied: pydantic<3,>=1.10 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.10.9)\n", - "Requirement already satisfied: anyio<5,>=3.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.2.0)\n", - "Requirement already satisfied: distro<2,>=1.7.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.9.0)\n", - "Requirement already satisfied: httpx<1,>=0.23.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (0.26.0)\n", - "Requirement already satisfied: sniffio in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.3.0)\n", - "Requirement already satisfied: tqdm>4 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.66.1)\n", - "Requirement already satisfied: typing-extensions<5,>=4.7 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.9.0)\n", - "Requirement already satisfied: NumPy>=1.17.0rc1 in /home/vscode/.local/lib/python3.10/site-packages (from flaml->pyautogen>=0.2.3) (1.26.3)\n", - "Requirement already satisfied: regex>=2022.1.18 in /home/vscode/.local/lib/python3.10/site-packages (from tiktoken->pyautogen>=0.2.3) (2023.12.25)\n", - "Requirement already 
satisfied: requests>=2.26.0 in /usr/local/lib/python3.10/site-packages (from tiktoken->pyautogen>=0.2.3) (2.31.0)\n", - "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.10/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen>=0.2.3) (3.6)\n", - "Requirement already satisfied: exceptiongroup>=1.0.2 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen>=0.2.3) (1.2.0)\n", - "Requirement already satisfied: certifi in /usr/local/lib/python3.10/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (2023.11.17)\n", - "Requirement already satisfied: httpcore==1.* in /home/vscode/.local/lib/python3.10/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (1.0.2)\n", - "Requirement already satisfied: h11<0.15,>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (0.14.0)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken->pyautogen>=0.2.3) (3.3.2)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /home/vscode/.local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken->pyautogen>=0.2.3) (1.26.18)\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: chromadb in /home/vscode/.local/lib/python3.10/site-packages (0.4.22)\n", - "Requirement already satisfied: build>=1.0.3 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.0.3)\n", - "Requirement already satisfied: requests>=2.28 in /usr/local/lib/python3.10/site-packages (from chromadb) (2.31.0)\n", - "Requirement already satisfied: pydantic>=1.9 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.10.9)\n", - "Requirement already satisfied: chroma-hnswlib==0.7.3 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.7.3)\n", - 
"Requirement already satisfied: fastapi>=0.95.2 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.108.0)\n", - "Requirement already satisfied: uvicorn>=0.18.3 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.25.0)\n", - "Requirement already satisfied: numpy>=1.22.5 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.26.3)\n", - "Requirement already satisfied: posthog>=2.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (3.1.0)\n", - "Requirement already satisfied: typing-extensions>=4.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.9.0)\n", - "Requirement already satisfied: pulsar-client>=3.1.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (3.4.0)\n", - "Requirement already satisfied: onnxruntime>=1.14.1 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.16.3)\n", - "Requirement already satisfied: opentelemetry-api>=1.2.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-exporter-otlp-proto-grpc>=1.2.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-instrumentation-fastapi>=0.41b0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-sdk>=1.2.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.22.0)\n", - "Requirement already satisfied: tokenizers>=0.13.2 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.15.0)\n", - "Requirement already satisfied: pypika>=0.48.9 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.48.9)\n", - "Requirement already satisfied: tqdm>=4.65.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.66.1)\n", - "Requirement already satisfied: overrides>=7.3.1 
in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (7.4.0)\n", - "Requirement already satisfied: importlib-resources in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (6.1.1)\n", - "Requirement already satisfied: grpcio>=1.58.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (1.60.0)\n", - "Requirement already satisfied: bcrypt>=4.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.1.2)\n", - "Requirement already satisfied: typer>=0.9.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (0.9.0)\n", - "Requirement already satisfied: kubernetes>=28.1.0 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (28.1.0)\n", - "Requirement already satisfied: tenacity>=8.2.3 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (8.2.3)\n", - "Requirement already satisfied: PyYAML>=6.0.0 in /usr/local/lib/python3.10/site-packages (from chromadb) (6.0.1)\n", - "Requirement already satisfied: mmh3>=4.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from chromadb) (4.0.1)\n", - "Requirement already satisfied: packaging>=19.0 in /usr/local/lib/python3.10/site-packages (from build>=1.0.3->chromadb) (23.2)\n", - "Requirement already satisfied: pyproject_hooks in /home/vscode/.local/lib/python3.10/site-packages (from build>=1.0.3->chromadb) (1.0.0)\n", - "Requirement already satisfied: tomli>=1.1.0 in /usr/local/lib/python3.10/site-packages (from build>=1.0.3->chromadb) (2.0.1)\n", - "Requirement already satisfied: starlette<0.33.0,>=0.29.0 in /home/vscode/.local/lib/python3.10/site-packages (from fastapi>=0.95.2->chromadb) (0.32.0.post1)\n", - "Requirement already satisfied: certifi>=14.05.14 in /usr/local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (2023.11.17)\n", - "Requirement already satisfied: six>=1.9.0 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.16.0)\n", - "Requirement already 
satisfied: python-dateutil>=2.5.3 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (2.8.2)\n", - "Requirement already satisfied: google-auth>=1.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (2.26.1)\n", - "Requirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.7.0)\n", - "Requirement already satisfied: requests-oauthlib in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.3.1)\n", - "Requirement already satisfied: oauthlib>=3.2.2 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (3.2.2)\n", - "Requirement already satisfied: urllib3<2.0,>=1.24.2 in /home/vscode/.local/lib/python3.10/site-packages (from kubernetes>=28.1.0->chromadb) (1.26.18)\n", - "Requirement already satisfied: coloredlogs in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) (15.0.1)\n", - "Requirement already satisfied: flatbuffers in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) (23.5.26)\n", - "Requirement already satisfied: protobuf in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) (4.25.1)\n", - "Requirement already satisfied: sympy in /home/vscode/.local/lib/python3.10/site-packages (from onnxruntime>=1.14.1->chromadb) (1.12)\n", - "Requirement already satisfied: deprecated>=1.2.6 in /usr/local/lib/python3.10/site-packages (from opentelemetry-api>=1.2.0->chromadb) (1.2.14)\n", - "Requirement already satisfied: importlib-metadata<7.0,>=6.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-api>=1.2.0->chromadb) (6.11.0)\n", - "Requirement already satisfied: backoff<3.0.0,>=1.10.0 in /home/vscode/.local/lib/python3.10/site-packages (from 
opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (2.2.1)\n", - "Requirement already satisfied: googleapis-common-protos~=1.52 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (1.62.0)\n", - "Requirement already satisfied: opentelemetry-exporter-otlp-proto-common==1.22.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-proto==1.22.0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-exporter-otlp-proto-grpc>=1.2.0->chromadb) (1.22.0)\n", - "Requirement already satisfied: opentelemetry-instrumentation-asgi==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-instrumentation==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-semantic-conventions==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: opentelemetry-util-http==0.43b0 in /home/vscode/.local/lib/python3.10/site-packages (from opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (0.43b0)\n", - "Requirement already satisfied: setuptools>=16.0 in /usr/local/lib/python3.10/site-packages (from opentelemetry-instrumentation==0.43b0->opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (69.0.2)\n", - "Requirement already satisfied: wrapt<2.0.0,>=1.0.0 in /usr/local/lib/python3.10/site-packages (from opentelemetry-instrumentation==0.43b0->opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (1.16.0)\n", - "Requirement already satisfied: asgiref~=3.0 in /home/vscode/.local/lib/python3.10/site-packages (from 
opentelemetry-instrumentation-asgi==0.43b0->opentelemetry-instrumentation-fastapi>=0.41b0->chromadb) (3.7.2)\n", - "Requirement already satisfied: monotonic>=1.5 in /home/vscode/.local/lib/python3.10/site-packages (from posthog>=2.4.0->chromadb) (1.6)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests>=2.28->chromadb) (3.3.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests>=2.28->chromadb) (3.6)\n", - "Requirement already satisfied: huggingface_hub<1.0,>=0.16.4 in /home/vscode/.local/lib/python3.10/site-packages (from tokenizers>=0.13.2->chromadb) (0.20.2)\n", - "Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.10/site-packages (from typer>=0.9.0->chromadb) (8.1.7)\n", - "Requirement already satisfied: h11>=0.8 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn>=0.18.3->uvicorn[standard]>=0.18.3->chromadb) (0.14.0)\n", - "Requirement already satisfied: httptools>=0.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.6.1)\n", - "Requirement already satisfied: python-dotenv>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (1.0.0)\n", - "Requirement already satisfied: uvloop!=0.15.0,!=0.15.1,>=0.14.0 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.19.0)\n", - "Requirement already satisfied: watchfiles>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.21.0)\n", - "Requirement already satisfied: websockets>=10.4 in /home/vscode/.local/lib/python3.10/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (12.0)\n", - "Requirement already satisfied: cachetools<6.0,>=2.0.0 in /home/vscode/.local/lib/python3.10/site-packages (from google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (5.3.2)\n", - 
"Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/vscode/.local/lib/python3.10/site-packages (from google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (0.3.0)\n", - "Requirement already satisfied: rsa<5,>=3.1.4 in /home/vscode/.local/lib/python3.10/site-packages (from google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (4.9)\n", - "Requirement already satisfied: filelock in /home/vscode/.local/lib/python3.10/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers>=0.13.2->chromadb) (3.13.1)\n", - "Requirement already satisfied: fsspec>=2023.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from huggingface_hub<1.0,>=0.16.4->tokenizers>=0.13.2->chromadb) (2023.12.2)\n", - "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.10/site-packages (from importlib-metadata<7.0,>=6.0->opentelemetry-api>=1.2.0->chromadb) (3.17.0)\n", - "Requirement already satisfied: anyio<5,>=3.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from starlette<0.33.0,>=0.29.0->fastapi>=0.95.2->chromadb) (4.2.0)\n", - "Requirement already satisfied: humanfriendly>=9.1 in /home/vscode/.local/lib/python3.10/site-packages (from coloredlogs->onnxruntime>=1.14.1->chromadb) (10.0)\n", - "Requirement already satisfied: mpmath>=0.19 in /home/vscode/.local/lib/python3.10/site-packages (from sympy->onnxruntime>=1.14.1->chromadb) (1.3.0)\n", - "Requirement already satisfied: sniffio>=1.1 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.4.0->starlette<0.33.0,>=0.29.0->fastapi>=0.95.2->chromadb) (1.3.0)\n", - "Requirement already satisfied: exceptiongroup>=1.0.2 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.4.0->starlette<0.33.0,>=0.29.0->fastapi>=0.95.2->chromadb) (1.2.0)\n", - "Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /home/vscode/.local/lib/python3.10/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.0.1->kubernetes>=28.1.0->chromadb) (0.5.1)\n", - "Defaulting to user installation because normal 
site-packages is not writeable\n", - "Requirement already satisfied: sentence_transformers in /home/vscode/.local/lib/python3.10/site-packages (2.2.2)\n", - "Requirement already satisfied: transformers<5.0.0,>=4.6.0 in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (4.36.2)\n", - "Requirement already satisfied: tqdm in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (4.66.1)\n", - "Requirement already satisfied: torch>=1.6.0 in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (2.1.2)\n", - "Requirement already satisfied: torchvision in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (0.16.2)\n", - "Requirement already satisfied: numpy in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (1.26.3)\n", - "Requirement already satisfied: scikit-learn in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (1.3.2)\n", - "Requirement already satisfied: scipy in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (1.11.4)\n", - "Requirement already satisfied: nltk in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (3.8.1)\n", - "Requirement already satisfied: sentencepiece in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (0.1.99)\n", - "Requirement already satisfied: huggingface-hub>=0.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from sentence_transformers) (0.20.2)\n", - "Requirement already satisfied: filelock in /home/vscode/.local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (3.13.1)\n", - "Requirement already satisfied: fsspec>=2023.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (2023.12.2)\n", - "Requirement already satisfied: requests in /usr/local/lib/python3.10/site-packages (from 
huggingface-hub>=0.4.0->sentence_transformers) (2.31.0)\n", - "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (6.0.1)\n", - "Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/vscode/.local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (4.9.0)\n", - "Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.10/site-packages (from huggingface-hub>=0.4.0->sentence_transformers) (23.2)\n", - "Requirement already satisfied: sympy in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (1.12)\n", - "Requirement already satisfied: networkx in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (3.2.1)\n", - "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (3.1.2)\n", - "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (8.9.2.26)\n", - "Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.3.1)\n", - "Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /home/vscode/.local/lib/python3.10/site-packages (from 
torch>=1.6.0->sentence_transformers) (11.0.2.54)\n", - "Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (10.3.2.106)\n", - "Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (11.4.5.107)\n", - "Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.0.106)\n", - "Requirement already satisfied: nvidia-nccl-cu12==2.18.1 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (2.18.1)\n", - "Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (12.1.105)\n", - "Requirement already satisfied: triton==2.1.0 in /home/vscode/.local/lib/python3.10/site-packages (from torch>=1.6.0->sentence_transformers) (2.1.0)\n", - "Requirement already satisfied: nvidia-nvjitlink-cu12 in /home/vscode/.local/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch>=1.6.0->sentence_transformers) (12.3.101)\n", - "Requirement already satisfied: regex!=2019.12.17 in /home/vscode/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence_transformers) (2023.12.25)\n", - "Requirement already satisfied: tokenizers<0.19,>=0.14 in /home/vscode/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence_transformers) (0.15.0)\n", - "Requirement already satisfied: safetensors>=0.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.6.0->sentence_transformers) (0.4.1)\n", - "Requirement already satisfied: click in /usr/local/lib/python3.10/site-packages (from nltk->sentence_transformers) (8.1.7)\n", - "Requirement already satisfied: joblib in 
/home/vscode/.local/lib/python3.10/site-packages (from nltk->sentence_transformers) (1.3.2)\n", - "Requirement already satisfied: threadpoolctl>=2.0.0 in /home/vscode/.local/lib/python3.10/site-packages (from scikit-learn->sentence_transformers) (3.2.0)\n", - "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /home/vscode/.local/lib/python3.10/site-packages (from torchvision->sentence_transformers) (10.2.0)\n", - "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/site-packages (from jinja2->torch>=1.6.0->sentence_transformers) (2.1.3)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (3.3.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (3.6)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /home/vscode/.local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (1.26.18)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/site-packages (from requests->huggingface-hub>=0.4.0->sentence_transformers) (2023.11.17)\n", - "Requirement already satisfied: mpmath>=0.19 in /home/vscode/.local/lib/python3.10/site-packages (from sympy->torch>=1.6.0->sentence_transformers) (1.3.0)\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: tiktoken in /home/vscode/.local/lib/python3.10/site-packages (0.5.2)\n", - "Requirement already satisfied: regex>=2022.1.18 in /home/vscode/.local/lib/python3.10/site-packages (from tiktoken) (2023.12.25)\n", - "Requirement already satisfied: requests>=2.26.0 in /usr/local/lib/python3.10/site-packages (from tiktoken) (2.31.0)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from 
requests>=2.26.0->tiktoken) (3.3.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken) (3.6)\n", - "Requirement already satisfied: urllib3<3,>=1.21.1 in /home/vscode/.local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken) (1.26.18)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->tiktoken) (2023.11.17)\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: pypdf in /home/vscode/.local/lib/python3.10/site-packages (3.17.4)\n" - ] - } - ], + "outputs": [], "source": [ - "!pip install \"pyautogen>=0.2.3\"\n", + "!pip install \"autogen-agentchat~=0.2\"\n", "!pip install chromadb\n", "!pip install sentence_transformers\n", "!pip install tiktoken\n", diff --git a/notebook/JSON_mode_example.ipynb b/notebook/JSON_mode_example.ipynb index c4b65c4d9f4..eb09f51983e 100644 --- a/notebook/JSON_mode_example.ipynb +++ b/notebook/JSON_mode_example.ipynb @@ -29,7 +29,7 @@ "JSON mode is a feature of OpenAI API, however strong models (such as Claude 3 Opus), can generate appropriate json as well.\n", "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -40,7 +40,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "# %pip install \"pyautogen>=0.2.3\"\n", + "# %pip install \"autogen-agentchat~=0.2.3\"\n", "\n", "# In Your OAI_CONFIG_LIST file, you must have two configs,\n", "# one with: \"response_format\": { \"type\": \"text\" }\n", diff --git a/notebook/agentchat_MathChat.ipynb b/notebook/agentchat_MathChat.ipynb index db7c6594d99..69c38031b2f 100644 --- a/notebook/agentchat_MathChat.ipynb +++ b/notebook/agentchat_MathChat.ipynb @@ -24,7 +24,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[mathchat]\n", + "pip install autogen-agentchat[mathchat]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_RetrieveChat.ipynb b/notebook/agentchat_RetrieveChat.ipynb index 0b829835a0a..6ca2d1ac512 100644 --- a/notebook/agentchat_RetrieveChat.ipynb +++ b/notebook/agentchat_RetrieveChat.ipynb @@ -28,7 +28,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[retrievechat] flaml[automl]\n", + "pip install autogen-agentchat[retrievechat]~=0.2 flaml[automl]\n", "```\n", "\n", "*You'll need to install `chromadb<=0.5.0` if you see issue like [#3551](https://github.com/microsoft/autogen/issues/3551).*\n", diff --git a/notebook/agentchat_RetrieveChat_mongodb.ipynb b/notebook/agentchat_RetrieveChat_mongodb.ipynb index 09c3c44bef2..f1f85f65a80 100644 --- a/notebook/agentchat_RetrieveChat_mongodb.ipynb +++ b/notebook/agentchat_RetrieveChat_mongodb.ipynb @@ -22,7 +22,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install 
pyautogen[retrievechat-mongodb] flaml[automl]\n", + "pip install autogen-agentchat[retrievechat-mongodb]~=0.2 flaml[automl]\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_RetrieveChat_pgvector.ipynb b/notebook/agentchat_RetrieveChat_pgvector.ipynb index 4d9dd44c33d..022b1347a2d 100644 --- a/notebook/agentchat_RetrieveChat_pgvector.ipynb +++ b/notebook/agentchat_RetrieveChat_pgvector.ipynb @@ -24,7 +24,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[retrievechat-pgvector] flaml[automl]\n", + "pip install autogen-agentchat[retrievechat-pgvector]~=0.2 flaml[automl]\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_RetrieveChat_qdrant.ipynb b/notebook/agentchat_RetrieveChat_qdrant.ipynb index 0035a8e3081..9be4cbfe528 100644 --- a/notebook/agentchat_RetrieveChat_qdrant.ipynb +++ b/notebook/agentchat_RetrieveChat_qdrant.ipynb @@ -21,7 +21,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install \"pyautogen[retrievechat-qdrant]\" \"flaml[automl]\"\n", + "pip install \"autogen-agentchat[retrievechat-qdrant]~=0.2\" \"flaml[automl]\"\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -43,7 +43,7 @@ } ], "source": [ - "%pip install \"pyautogen[retrievechat-qdrant]\" \"flaml[automl]\" -q" + "%pip install \"autogen-agentchat[retrievechat-qdrant]~=0.2\" \"flaml[automl]\" -q" ] }, { diff --git a/notebook/agentchat_agentops.ipynb b/notebook/agentchat_agentops.ipynb index 71106e45d3c..7c5e09c9cc5 100644 --- a/notebook/agentchat_agentops.ipynb +++ b/notebook/agentchat_agentops.ipynb @@ -55,7 +55,7 @@ "Some extra dependencies are needed for this notebook, which can be installed 
via pip:\n", "\n", "```bash\n", - "pip install pyautogen agentops\n", + "pip install autogen-agentchat~=0.2 agentops\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_auto_feedback_from_code_execution.ipynb b/notebook/agentchat_auto_feedback_from_code_execution.ipynb index 51b5a591734..31169e1c7a0 100644 --- a/notebook/agentchat_auto_feedback_from_code_execution.ipynb +++ b/notebook/agentchat_auto_feedback_from_code_execution.ipynb @@ -16,7 +16,7 @@ ":::info Requirements\n", "Install the following packages before running the code below:\n", "```bash\n", - "pip install pyautogen matplotlib yfinance\n", + "pip install autogen-agentchat~=0.2 matplotlib yfinance\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_azr_ai_search.ipynb b/notebook/agentchat_azr_ai_search.ipynb index f4521f60d27..3632a1fd87a 100644 --- a/notebook/agentchat_azr_ai_search.ipynb +++ b/notebook/agentchat_azr_ai_search.ipynb @@ -84,9 +84,8 @@ "metadata": {}, "outputs": [], "source": [ - "!pip3 install pyautogen==0.2.16\n", + "!pip3 install autogen-agentchat[graph]~=0.2\n", "!pip3 install python-dotenv==1.0.1\n", - "!pip3 install pyautogen[graph]>=0.2.11\n", "!pip3 install azure-search-documents==11.4.0b8\n", "!pip3 install azure-identity==1.12.0" ] diff --git a/notebook/agentchat_cost_token_tracking.ipynb b/notebook/agentchat_cost_token_tracking.ipynb index a60fd6de15e..17106e7c938 100644 --- a/notebook/agentchat_cost_token_tracking.ipynb +++ b/notebook/agentchat_cost_token_tracking.ipynb @@ -54,7 +54,7 @@ "\n", "AutoGen requires `Python>=3.8`:\n", "```bash\n", - "pip install \"pyautogen\"\n", + "pip install \"autogen-agentchat~=0.2\"\n", "```" ] }, diff --git a/notebook/agentchat_custom_model.ipynb b/notebook/agentchat_custom_model.ipynb index 5097713a092..773247ee0b9 100644 --- a/notebook/agentchat_custom_model.ipynb 
+++ b/notebook/agentchat_custom_model.ipynb @@ -22,7 +22,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen torch transformers sentencepiece\n", + "pip install autogen-agentchat~=0.2 torch transformers sentencepiece\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_dalle_and_gpt4v.ipynb b/notebook/agentchat_dalle_and_gpt4v.ipynb index e07578016a9..afc4524734a 100644 --- a/notebook/agentchat_dalle_and_gpt4v.ipynb +++ b/notebook/agentchat_dalle_and_gpt4v.ipynb @@ -17,7 +17,7 @@ "source": [ "### Before everything starts, install AutoGen with the `lmm` option\n", "```bash\n", - "pip install \"pyautogen[lmm]>=0.2.3\"\n", + "pip install \"autogen-agentchat[lmm]~=0.2\"\n", "```" ] }, diff --git a/notebook/agentchat_databricks_dbrx.ipynb b/notebook/agentchat_databricks_dbrx.ipynb index 12d40a37db1..c063906ea16 100644 --- a/notebook/agentchat_databricks_dbrx.ipynb +++ b/notebook/agentchat_databricks_dbrx.ipynb @@ -15,7 +15,7 @@ "This notebook will demonstrate a few basic examples of Autogen with DBRX, including the use of `AssistantAgent`, `UserProxyAgent`, and `ConversableAgent`. These demos are not intended to be exhaustive - feel free to use them as a base to build upon!\n", "\n", "## Requirements\n", - "AutoGen must be installed on your Databricks cluster, and requires `Python>=3.8`. This example includes the `%pip` magic command to install: `%pip install pyautogen`, as well as other necessary libraries. \n", + "AutoGen must be installed on your Databricks cluster, and requires `Python>=3.8`. This example includes the `%pip` magic command to install: `%pip install autogen-agentchat~=0.2`, as well as other necessary libraries. 
\n", "\n", "This code has been tested on: \n", "* [Serverless Notebooks](https://docs.databricks.com/en/compute/serverless.html) (in public preview as of Apr 18, 2024)\n", @@ -47,13 +47,11 @@ { "name": "stdout", "output_type": "stream", - "text": [ - "" - ] + "text": [] } ], "source": [ - "%pip install pyautogen==0.2.25 openai==1.21.2 typing_extensions==4.11.0 --upgrade" + "%pip install autogen-agentchat~=0.2.25 openai==1.21.2 typing_extensions==4.11.0 --upgrade" ] }, { diff --git a/notebook/agentchat_function_call.ipynb b/notebook/agentchat_function_call.ipynb index 2a173c8e269..ff94c0d4fb0 100644 --- a/notebook/agentchat_function_call.ipynb +++ b/notebook/agentchat_function_call.ipynb @@ -23,9 +23,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen`:\n", + "AutoGen requires `Python>=3.8`. To run this notebook example, please Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -36,7 +36,7 @@ "metadata": {}, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_function_call_async.ipynb b/notebook/agentchat_function_call_async.ipynb index 57233547ebc..e0e24af9232 100644 --- a/notebook/agentchat_function_call_async.ipynb +++ b/notebook/agentchat_function_call_async.ipynb @@ -20,9 +20,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_function_call_code_writing.ipynb b/notebook/agentchat_function_call_code_writing.ipynb index 92074e4821b..924592bbdec 100644 --- a/notebook/agentchat_function_call_code_writing.ipynb +++ 
b/notebook/agentchat_function_call_code_writing.ipynb @@ -28,7 +28,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install pyautogen" + "! pip install autogen-agentchat~=0.2" ] }, { diff --git a/notebook/agentchat_function_call_currency_calculator.ipynb b/notebook/agentchat_function_call_currency_calculator.ipynb index 34ff92ff91a..36ef81d5edb 100644 --- a/notebook/agentchat_function_call_currency_calculator.ipynb +++ b/notebook/agentchat_function_call_currency_calculator.ipynb @@ -21,9 +21,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen`:\n", + "AutoGen requires `Python>=3.8`. To run this notebook example, please install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -34,7 +34,7 @@ "metadata": {}, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_function_call_with_composio.ipynb b/notebook/agentchat_function_call_with_composio.ipynb index 001c56960fc..1eea9e908fc 100644 --- a/notebook/agentchat_function_call_with_composio.ipynb +++ b/notebook/agentchat_function_call_with_composio.ipynb @@ -60,7 +60,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen composio-autogen\n", + "pip install autogen-agentchat~=0.2 composio-autogen\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb b/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb index aea134907b7..3e0c721f65e 100644 --- a/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb +++ b/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb @@ -26,7 +26,7 @@ "metadata": {}, "outputs": [], "source": [ - "%pip install pyautogen 
llama-index llama-index-tools-wikipedia llama-index-readers-wikipedia wikipedia" + "%pip install autogen-agentchat~=0.2 llama-index llama-index-tools-wikipedia llama-index-readers-wikipedia wikipedia" ] }, { diff --git a/notebook/agentchat_groupchat.ipynb b/notebook/agentchat_groupchat.ipynb index d2c061d1410..925c7124a29 100644 --- a/notebook/agentchat_groupchat.ipynb +++ b/notebook/agentchat_groupchat.ipynb @@ -14,9 +14,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -218,8 +218,11 @@ ], "metadata": { "front_matter": { - "tags": ["orchestration", "group chat"], - "description": "Explore the utilization of large language models in automated group chat scenarios, where agents perform tasks collectively, demonstrating how they can be configured, interact with each other, and retrieve specific information from external resources." 
+ "description": "Explore the utilization of large language models in automated group chat scenarios, where agents perform tasks collectively, demonstrating how they can be configured, interact with each other, and retrieve specific information from external resources.", + "tags": [ + "orchestration", + "group chat" + ] }, "kernelspec": { "display_name": "flaml", diff --git a/notebook/agentchat_groupchat_RAG.ipynb b/notebook/agentchat_groupchat_RAG.ipynb index e18bd99c151..aeb4f714e68 100644 --- a/notebook/agentchat_groupchat_RAG.ipynb +++ b/notebook/agentchat_groupchat_RAG.ipynb @@ -15,7 +15,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[retrievechat]\n", + "pip install autogen-agentchat[retrievechat]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_groupchat_customized.ipynb b/notebook/agentchat_groupchat_customized.ipynb index dde124aef7d..3b85223a731 100644 --- a/notebook/agentchat_groupchat_customized.ipynb +++ b/notebook/agentchat_groupchat_customized.ipynb @@ -39,9 +39,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_groupchat_finite_state_machine.ipynb b/notebook/agentchat_groupchat_finite_state_machine.ipynb index 74b6f3d4047..cfe45662a8f 100644 --- a/notebook/agentchat_groupchat_finite_state_machine.ipynb +++ b/notebook/agentchat_groupchat_finite_state_machine.ipynb @@ -18,9 +18,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", 
"\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -35,7 +35,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "%pip install pyautogen[graph]>=0.2.11" + "%pip install autogen-agentchat[graph]~=0.2.11" ] }, { diff --git a/notebook/agentchat_groupchat_research.ipynb b/notebook/agentchat_groupchat_research.ipynb index c448ed8cb7a..6adf653f903 100644 --- a/notebook/agentchat_groupchat_research.ipynb +++ b/notebook/agentchat_groupchat_research.ipynb @@ -14,9 +14,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -515,8 +515,10 @@ ], "metadata": { "front_matter": { - "tags": ["group chat"], - "description": "Perform research using a group chat with a number of specialized agents" + "description": "Perform research using a group chat with a number of specialized agents", + "tags": [ + "group chat" + ] }, "kernelspec": { "display_name": "flaml", diff --git a/notebook/agentchat_groupchat_stateflow.ipynb b/notebook/agentchat_groupchat_stateflow.ipynb index 3081056eac9..6a7869ec540 100644 --- a/notebook/agentchat_groupchat_stateflow.ipynb +++ b/notebook/agentchat_groupchat_stateflow.ipynb @@ -12,9 +12,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_groupchat_vis.ipynb b/notebook/agentchat_groupchat_vis.ipynb index 29f968752ae..d660bb0f51e 100644 --- a/notebook/agentchat_groupchat_vis.ipynb +++ b/notebook/agentchat_groupchat_vis.ipynb @@ -12,9 +12,9 @@ "\n", "````{=mdx}\n", 
":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -976,8 +976,10 @@ ], "metadata": { "front_matter": { - "tags": ["group chat"], - "description": "Explore a group chat example using agents such as a coder and visualization agent." + "description": "Explore a group chat example using agents such as a coder and visualization agent.", + "tags": [ + "group chat" + ] }, "kernelspec": { "display_name": "flaml", diff --git a/notebook/agentchat_human_feedback.ipynb b/notebook/agentchat_human_feedback.ipynb index 000d788d6a5..3c21c7c9f4f 100644 --- a/notebook/agentchat_human_feedback.ipynb +++ b/notebook/agentchat_human_feedback.ipynb @@ -28,7 +28,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -45,7 +45,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_image_generation_capability.ipynb b/notebook/agentchat_image_generation_capability.ipynb index b5d298d7f4d..d8a01fc3032 100644 --- a/notebook/agentchat_image_generation_capability.ipynb +++ b/notebook/agentchat_image_generation_capability.ipynb @@ -20,7 +20,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[lmm]\n", + "pip install autogen-agentchat[lmm]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_langchain.ipynb b/notebook/agentchat_langchain.ipynb index 83ab2df44c2..1d5fea19e06 100644 --- a/notebook/agentchat_langchain.ipynb +++ 
b/notebook/agentchat_langchain.ipynb @@ -28,9 +28,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen` and `Langchain`:\n", + "AutoGen requires `Python>=3.8`. To run this notebook example, please install `autogen-agentchat` and `langchain`:\n", "```bash\n", - "pip install pyautogen Langchain\n", + "pip install autogen-agentchat~=0.2 langchain\n", "```" ] }, @@ -47,7 +47,7 @@ "metadata": {}, "outputs": [], "source": [ - "%pip install \"pyautogen>=0.2.3\" Langchain" + "%pip install \"autogen-agentchat~=0.2\" langchain" ] }, { diff --git a/notebook/agentchat_lmm_gpt-4v.ipynb b/notebook/agentchat_lmm_gpt-4v.ipynb index 7c9e3ea125c..7136ac62e73 100644 --- a/notebook/agentchat_lmm_gpt-4v.ipynb +++ b/notebook/agentchat_lmm_gpt-4v.ipynb @@ -21,9 +21,9 @@ "source": [ "### Before everything starts, install AutoGen with the `lmm` option\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install \"pyautogen[lmm]>=0.2.17\"\n", + "pip install \"autogen-agentchat[lmm]~=0.2\"\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n" diff --git a/notebook/agentchat_lmm_llava.ipynb b/notebook/agentchat_lmm_llava.ipynb index 61efc74e00e..e5eb26362ec 100644 --- a/notebook/agentchat_lmm_llava.ipynb +++ b/notebook/agentchat_lmm_llava.ipynb @@ -26,7 +26,7 @@ "source": [ "### Before everything starts, install AutoGen with the `lmm` option\n", "```bash\n", - "pip install \"pyautogen[lmm]>=0.2.3\"\n", + "pip install \"autogen-agentchat[lmm]~=0.2\"\n", "```" ] }, diff --git a/notebook/agentchat_memory_using_mem0.ipynb b/notebook/agentchat_memory_using_mem0.ipynb index d590002164b..b433e05df41 100644 --- a/notebook/agentchat_memory_using_mem0.ipynb +++ b/notebook/agentchat_memory_using_mem0.ipynb @@ -17,7 +17,7 @@ "source": [ "This notebook demonstrates an intelligent customer service chatbot system that combines:\n", "\n", - "- PyAutoGen 
for conversational agents\n", + "- AutoGen for conversational agents\n", "- Mem0 for memory management\n", "\n", "[Mem0](https://www.mem0.ai/) provides a smart, self-improving memory layer for Large Language Models (LLMs), enabling developers to create personalized AI experiences that evolve with each user interaction. Refer [docs](https://docs.mem0.ai/overview) for more information.\n", @@ -50,7 +50,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen mem0ai\n", + "pip install autogen-agentchat~=0.2 mem0ai\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_microsoft_fabric.ipynb b/notebook/agentchat_microsoft_fabric.ipynb index 1aaee58f092..22500014678 100644 --- a/notebook/agentchat_microsoft_fabric.ipynb +++ b/notebook/agentchat_microsoft_fabric.ipynb @@ -225,8 +225,8 @@ "metadata": {}, "outputs": [], "source": [ - "# pyautogen>0.1.14 supports openai>=1\n", - "%pip install \"pyautogen>0.2\" \"openai>1\" -q" + "# autogen-agentchat>0.1.14 supports openai>=1\n", + "%pip install \"autogen-agentchat~=0.2\" \"openai>1\" -q" ] }, { @@ -418,7 +418,7 @@ }, "outputs": [], "source": [ - "%pip install \"pyautogen[retrievechat,lmm]>=0.2.28\" -q" + "%pip install \"autogen-agentchat[retrievechat,lmm]~=0.2\" -q" ] }, { diff --git a/notebook/agentchat_multi_task_async_chats.ipynb b/notebook/agentchat_multi_task_async_chats.ipynb index ad75618a546..86b22edd2be 100644 --- a/notebook/agentchat_multi_task_async_chats.ipynb +++ b/notebook/agentchat_multi_task_async_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git 
a/notebook/agentchat_multi_task_chats.ipynb b/notebook/agentchat_multi_task_chats.ipynb index 2c200f52354..5defb22b13d 100644 --- a/notebook/agentchat_multi_task_chats.ipynb +++ b/notebook/agentchat_multi_task_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_nested_chats_chess.ipynb b/notebook/agentchat_nested_chats_chess.ipynb index b3e369fba8c..e5a22fce5a7 100644 --- a/notebook/agentchat_nested_chats_chess.ipynb +++ b/notebook/agentchat_nested_chats_chess.ipynb @@ -39,7 +39,7 @@ "source": [ "## Installation\n", "\n", - "First you need to install the `pyautogen` and `chess` packages to use AutoGen." + "First, you need to install the `autogen-agentchat~=0.2` and `chess` packages to use AutoGen." ] }, { @@ -48,7 +48,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install -qqq pyautogen chess" + "! pip install -qqq autogen-agentchat~=0.2 chess" ] }, { diff --git a/notebook/agentchat_nested_chats_chess_altmodels.ipynb b/notebook/agentchat_nested_chats_chess_altmodels.ipynb index 69d3edbcfb5..8980a87e881 100644 --- a/notebook/agentchat_nested_chats_chess_altmodels.ipynb +++ b/notebook/agentchat_nested_chats_chess_altmodels.ipynb @@ -40,7 +40,7 @@ "source": [ "## Installation\n", "\n", - "First, you need to install the `pyautogen` and `chess` packages to use AutoGen. We'll include Anthropic and Together.AI libraries." + "First, you need to install the `autogen-agentchat~=0.2` and `chess` packages to use AutoGen. We'll include Anthropic and Together.AI libraries." ] }, { @@ -49,7 +49,7 @@ "metadata": {}, "outputs": [], "source": [ - "! 
pip install -qqq autogen-agentchat[anthropic,together]~=0.2 chess" ] }, { diff --git a/notebook/agentchat_nested_sequential_chats.ipynb b/notebook/agentchat_nested_sequential_chats.ipynb index 3c4ca199484..eb27ef72e35 100644 --- a/notebook/agentchat_nested_sequential_chats.ipynb +++ b/notebook/agentchat_nested_sequential_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_nestedchat.ipynb b/notebook/agentchat_nestedchat.ipynb index f81f2039859..bfb1cc68058 100644 --- a/notebook/agentchat_nestedchat.ipynb +++ b/notebook/agentchat_nestedchat.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_nestedchat_optiguide.ipynb b/notebook/agentchat_nestedchat_optiguide.ipynb index c1648bce62b..cf9f6c5890d 100644 --- a/notebook/agentchat_nestedchat_optiguide.ipynb +++ b/notebook/agentchat_nestedchat_optiguide.ipynb @@ -21,7 +21,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen eventlet gurobipy\n", + "pip install autogen-agentchat~=0.2 eventlet gurobipy\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_oai_assistant_function_call.ipynb b/notebook/agentchat_oai_assistant_function_call.ipynb index bc78819fb19..b96d1eef909 100644 --- a/notebook/agentchat_oai_assistant_function_call.ipynb +++ 
b/notebook/agentchat_oai_assistant_function_call.ipynb @@ -19,9 +19,9 @@ "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -36,7 +36,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_oai_assistant_groupchat.ipynb b/notebook/agentchat_oai_assistant_groupchat.ipynb index d38fed4cdae..e236aa1120e 100644 --- a/notebook/agentchat_oai_assistant_groupchat.ipynb +++ b/notebook/agentchat_oai_assistant_groupchat.ipynb @@ -16,9 +16,9 @@ "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_oai_code_interpreter.ipynb b/notebook/agentchat_oai_code_interpreter.ipynb index a8aeb614789..92eb3e19e23 100644 --- a/notebook/agentchat_oai_code_interpreter.ipynb +++ b/notebook/agentchat_oai_code_interpreter.ipynb @@ -12,9 +12,9 @@ "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_planning.ipynb b/notebook/agentchat_planning.ipynb index 14b393958dc..9953bf3b57f 100644 --- a/notebook/agentchat_planning.ipynb +++ b/notebook/agentchat_planning.ipynb @@ -26,9 +26,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install pyautogen and docker:\n", + "AutoGen requires `Python>=3.8`. To run this notebook example, please install autogen-agentchat and docker:\n", "```bash\n", - "pip install pyautogen docker\n", + "pip install autogen-agentchat~=0.2 docker\n", "```" ] }, @@ -45,7 +45,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\" docker" + "# %pip install \"autogen-agentchat~=0.2\" docker" ] }, { diff --git a/notebook/agentchat_society_of_mind.ipynb b/notebook/agentchat_society_of_mind.ipynb index df3a6c54339..091ab2e5519 100644 --- a/notebook/agentchat_society_of_mind.ipynb +++ b/notebook/agentchat_society_of_mind.ipynb @@ -15,9 +15,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -357,8 +357,11 @@ ], "metadata": { "front_matter": { - "tags": ["orchestration", "nested chat"], - "description": "Explore the demonstration of the SocietyOfMindAgent in the AutoGen library, which runs a group chat as an internal monologue, but appears to the external world as a single agent, offering a structured way to manage complex interactions among multiple 
agents and handle issues such as extracting responses from complex dialogues and dealing with context window constraints." + "description": "Explore the demonstration of the SocietyOfMindAgent in the AutoGen library, which runs a group chat as an internal monologue, but appears to the external world as a single agent, offering a structured way to manage complex interactions among multiple agents and handle issues such as extracting responses from complex dialogues and dealing with context window constraints.", + "tags": [ + "orchestration", + "nested chat" + ] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", diff --git a/notebook/agentchat_stream.ipynb b/notebook/agentchat_stream.ipynb index 8127cdfbab0..5536cef96aa 100644 --- a/notebook/agentchat_stream.ipynb +++ b/notebook/agentchat_stream.ipynb @@ -28,7 +28,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -45,7 +45,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_surfer.ipynb b/notebook/agentchat_surfer.ipynb index 46c4679e301..09c9cb3d4f8 100644 --- a/notebook/agentchat_surfer.ipynb +++ b/notebook/agentchat_surfer.ipynb @@ -15,7 +15,7 @@ "\n", "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install AutoGen with the optional `websurfer` dependencies:\n", "```bash\n", - "pip install \"pyautogen[websurfer]\"\n", + "pip install \"autogen-agentchat[websurfer]~=0.2\"\n", "```" ] }, @@ -25,7 +25,7 @@ "metadata": {}, "outputs": [], "source": [ - "# %pip install --quiet \"pyautogen[websurfer]\"" + "# %pip install --quiet \"autogen-agentchat[websurfer]~=0.2\"" ] }, { diff --git a/notebook/agentchat_teachability.ipynb b/notebook/agentchat_teachability.ipynb index ac239f793dc..4be1e135dab 100644 --- a/notebook/agentchat_teachability.ipynb +++ b/notebook/agentchat_teachability.ipynb @@ -22,7 +22,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[teachable]\n", + "pip install autogen-agentchat[teachable]~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", @@ -99,8 +99,8 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[92m\n", - "CLEARING MEMORY\u001B[0m\n" + "\u001b[92m\n", + "CLEARING MEMORY\u001b[0m\n" ] } ], @@ -152,14 +152,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "What is the Vicuna model?\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "The term \"Vicuna model\" does not point to a well-known concept or framework in the realms of science, technology, or social sciences as of my last knowledge update in early 2023. 
It's possible that the term could be a reference to a proprietary model or a concept that has emerged after my last update or it might be a misspelling or a misunderstanding.\n", "\n", @@ -185,14 +185,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Vicuna is a 13B-parameter language model released by Meta.\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "My apologies for the confusion. As of my last update, the Vicuna model had not been part of my database. If Vicuna is indeed a 13-billion-parameter language model developed by Meta (formerly Facebook Inc.), then it would be one of the large-scale transformer-based models akin to those like GPT-3 by OpenAI.\n", "\n", @@ -222,14 +222,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "What is the Orca model?\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "As of my last update, the Orca model appears to reference a new development that I do not have extensive information on, similar to the earlier reference to the Vicuna model.\n", "\n", @@ -255,14 +255,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", 
"\n", "Orca is a 13B-parameter language model developed by Microsoft. It outperforms Vicuna on most tasks.\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Thank you for providing the context about the Orca model. Based on the new information you've given, Orca is a language model with 13 billion parameters, similar in size to Meta's Vicuna model, but developed by Microsoft. If it outperforms Vicuna on most tasks, it suggests that it could have been trained on a more diverse dataset, use a more advanced architecture, have more effective training techniques, or some combination of these factors.\n", "\n", @@ -297,14 +297,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "How does the Vicuna model compare to the Orca model?\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "The Vicuna model and the Orca model are both large-scale language models with a significant number of parameters—13 billion, to be exact.\n", "\n", @@ -340,7 +340,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Please summarize this abstract.\n", "\n", @@ -350,9 +350,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> 
USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "AutoGen is an open-source framework designed to facilitate the creation of applications using large language models (LLMs) through the use of multiple conversational agents. These agents can be tailored to users' needs and are capable of interaction in multiple modes, including with other LLMs, human input, and additional tools. With AutoGen, developers have the flexibility to program agent interactions using both natural language and code, enabling the creation of complex patterns suitable for a wide range of applications. The framework has been proven effective across various fields, such as math, coding, question answering, and entertainment, based on empirical studies conducted to test its capabilities.\n", "\n", @@ -386,7 +386,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Please summarize this abstract. 
\n", "When I'm summarizing an abstract, I try to make the summary contain just three short bullet points: the title, the innovation, and the key empirical results.\n", @@ -397,9 +397,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "- Title: AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation\n", "- Innovation: AutoGen, an open-source framework that supports building large language model (LLM) applications by enabling conversation among multiple customizable and conversable agents.\n", @@ -436,7 +436,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Please summarize this abstract.\n", "\n", @@ -445,9 +445,9 @@ "Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. 
Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "- Title: Sparks of Artificial General Intelligence: Early experiments with GPT-4\n", "\n", @@ -487,7 +487,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "9 * 4 + 6 * 6 = 72\n", @@ -496,9 +496,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "To solve this problem, we need to find a way to add exactly 27 (since 99 - 72 = 27) to the left hand side of the equation by modifying only one of the integers in the equation. 
\n", "\n", @@ -563,7 +563,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "9 * 4 + 6 * 6 = 72\n", @@ -584,9 +584,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Given the new set of instructions and the correction that according to a past memory, the solution is \"9 * 1 + 6 * 9\", let's follow the steps carefully to arrive at the correct modified equation.\n", "\n", @@ -668,7 +668,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "9 * 4 + 6 * 6 = 72\n", @@ -677,9 +677,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Let's apply the steps you've provided to solve the problem at hand:\n", "\n", @@ -740,7 +740,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001B[33muser\u001B[0m (to teachable_agent):\n", + "\u001b[33muser\u001b[0m (to teachable_agent):\n", "\n", "Consider the identity: \n", "8 * 3 + 7 * 9 = 87\n", @@ -749,9 +749,9 @@ "\n", "\n", "--------------------------------------------------------------------------------\n", - "\u001B[31m\n", - ">>>>>>>> USING AUTO REPLY...\u001B[0m\n", - "\u001B[33mteachable_agent\u001B[0m (to user):\n", + "\u001b[31m\n", + ">>>>>>>> USING AUTO 
REPLY...\u001b[0m\n", + "\u001b[33mteachable_agent\u001b[0m (to user):\n", "\n", "Let's apply the plan step-by-step to find the correct modification:\n", "\n", diff --git a/notebook/agentchat_teachable_oai_assistants.ipynb b/notebook/agentchat_teachable_oai_assistants.ipynb index 3753be414f3..75744d9a397 100644 --- a/notebook/agentchat_teachable_oai_assistants.ipynb +++ b/notebook/agentchat_teachable_oai_assistants.ipynb @@ -28,7 +28,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install the [teachable] option.\n", "```bash\n", - "pip install \"pyautogen[teachable]\"\n", + "pip install \"autogen-agentchat[teachable]~=0.2\"\n", "```" ] }, @@ -39,7 +39,7 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "# %pip install \"pyautogen[teachable]\"" + "# %pip install \"autogen-agentchat[teachable]~=0.2\"" ] }, { diff --git a/notebook/agentchat_teaching.ipynb b/notebook/agentchat_teaching.ipynb index a61f3c7e08e..4d9564276cb 100644 --- a/notebook/agentchat_teaching.ipynb +++ b/notebook/agentchat_teaching.ipynb @@ -16,9 +16,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_transform_messages.ipynb b/notebook/agentchat_transform_messages.ipynb index d0216e05dd2..fe9ca5d1f51 100644 --- a/notebook/agentchat_transform_messages.ipynb +++ b/notebook/agentchat_transform_messages.ipynb @@ -12,9 +12,9 @@ "\n", "````{=mdx}\n", ":::info Requirements\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_two_users.ipynb 
b/notebook/agentchat_two_users.ipynb index eb9e0c1fbf2..14b95b53701 100644 --- a/notebook/agentchat_two_users.ipynb +++ b/notebook/agentchat_two_users.ipynb @@ -27,7 +27,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```" ] }, @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\"" + "# %pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb index 48812ad01a6..e19e61419ad 100644 --- a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb +++ b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb @@ -23,7 +23,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen openai openai-whisper\n", + "pip install autogen-agentchat~=0.2 openai openai-whisper\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchat_web_info.ipynb b/notebook/agentchat_web_info.ipynb index f990c128b78..e2962e9049f 100644 --- a/notebook/agentchat_web_info.ipynb +++ b/notebook/agentchat_web_info.ipynb @@ -30,9 +30,9 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install pyautogen and docker:\n", + "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install autogen-agentchat and docker:\n", "```bash\n", - "pip install pyautogen docker\n", + "pip install autogen-agentchat~=0.2 docker\n", "```" ] }, @@ -49,7 +49,7 @@ }, "outputs": [], "source": [ - "# %pip install \"pyautogen>=0.2.3\" docker" + "# %pip install \"autogen-agentchat~=0.2\" docker" ] }, { diff --git a/notebook/agentchat_webscraping_with_apify.ipynb b/notebook/agentchat_webscraping_with_apify.ipynb index 0429c10f8a7..c1fec78d83b 100644 --- a/notebook/agentchat_webscraping_with_apify.ipynb +++ b/notebook/agentchat_webscraping_with_apify.ipynb @@ -23,7 +23,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install -qqq pyautogen apify-client" + "! pip install -qqq autogen-agentchat~=0.2 apify-client" ] }, { diff --git a/notebook/agentchat_websockets.ipynb b/notebook/agentchat_websockets.ipynb index 7e6e449675c..107c7bc40e6 100644 --- a/notebook/agentchat_websockets.ipynb +++ b/notebook/agentchat_websockets.ipynb @@ -28,7 +28,7 @@ "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", - "pip install pyautogen[websockets] fastapi uvicorn\n", + "pip install autogen-agentchat[websockets]~=0.2 fastapi uvicorn\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agentchats_sequential_chats.ipynb b/notebook/agentchats_sequential_chats.ipynb index cffcbfdefcb..3fdd857cd03 100644 --- a/notebook/agentchats_sequential_chats.ipynb +++ b/notebook/agentchats_sequential_chats.ipynb @@ -15,9 +15,9 @@ "\n", "\\:\\:\\:info Requirements\n", "\n", - "Install `pyautogen`:\n", + "Install `autogen-agentchat`:\n", "```bash\n", - "pip install pyautogen\n", + "pip install autogen-agentchat~=0.2\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", diff --git a/notebook/agenteval_cq_math.ipynb b/notebook/agenteval_cq_math.ipynb index 
43ea28de1a3..199967a9108 100644 --- a/notebook/agenteval_cq_math.ipynb +++ b/notebook/agenteval_cq_math.ipynb @@ -30,12 +30,12 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install pyautogen, Docker, and OpenAI:\n" + "AutoGen requires `Python>=3.8`. To run this notebook example, please install autogen-agentchat, Docker, and OpenAI:\n" ] }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" @@ -49,72 +49,9 @@ "id": "68lTZZyJ1_BI", "outputId": "15a55fab-e13a-4654-b8cb-ae117478d6d8" }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: pyautogen>=0.2.3 in /home/vscode/.local/lib/python3.10/site-packages (0.2.17)\n", - "Requirement already satisfied: docker in /home/vscode/.local/lib/python3.10/site-packages (7.0.0)\n", - "Requirement already satisfied: diskcache in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (5.6.3)\n", - "Requirement already satisfied: flaml in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.1.2)\n", - "Requirement already satisfied: tiktoken in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (0.6.0)\n", - "Requirement already satisfied: openai>=1.3 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.14.1)\n", - "Requirement already satisfied: pydantic!=2.6.0,<3,>=1.10 in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.6.4)\n", - "Requirement already satisfied: termcolor in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (2.4.0)\n", - "Requirement already satisfied: python-dotenv in /home/vscode/.local/lib/python3.10/site-packages (from pyautogen>=0.2.3) (1.0.1)\n", - "Requirement already satisfied: 
requests>=2.26.0 in /usr/local/lib/python3.10/site-packages (from docker) (2.31.0)\n", - "Requirement already satisfied: packaging>=14.0 in /usr/local/lib/python3.10/site-packages (from docker) (24.0)\n", - "Requirement already satisfied: urllib3>=1.26.0 in /usr/local/lib/python3.10/site-packages (from docker) (2.2.1)\n", - "Requirement already satisfied: tqdm>4 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.66.2)\n", - "Requirement already satisfied: httpx<1,>=0.23.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (0.27.0)\n", - "Requirement already satisfied: distro<2,>=1.7.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.9.0)\n", - "Requirement already satisfied: sniffio in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (1.3.1)\n", - "Requirement already satisfied: anyio<5,>=3.5.0 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.3.0)\n", - "Requirement already satisfied: typing-extensions<5,>=4.7 in /home/vscode/.local/lib/python3.10/site-packages (from openai>=1.3->pyautogen>=0.2.3) (4.10.0)\n", - "Requirement already satisfied: annotated-types>=0.4.0 in /home/vscode/.local/lib/python3.10/site-packages (from pydantic!=2.6.0,<3,>=1.10->pyautogen>=0.2.3) (0.6.0)\n", - "Requirement already satisfied: pydantic-core==2.16.3 in /home/vscode/.local/lib/python3.10/site-packages (from pydantic!=2.6.0,<3,>=1.10->pyautogen>=0.2.3) (2.16.3)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->docker) (2024.2.2)\n", - "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->docker) (3.6)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/site-packages (from requests>=2.26.0->docker) (3.3.2)\n", - 
"Requirement already satisfied: NumPy>=1.17 in /home/vscode/.local/lib/python3.10/site-packages (from flaml->pyautogen>=0.2.3) (1.26.4)\n", - "Requirement already satisfied: regex>=2022.1.18 in /home/vscode/.local/lib/python3.10/site-packages (from tiktoken->pyautogen>=0.2.3) (2023.12.25)\n", - "Requirement already satisfied: exceptiongroup>=1.0.2 in /home/vscode/.local/lib/python3.10/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen>=0.2.3) (1.2.0)\n", - "Requirement already satisfied: httpcore==1.* in /home/vscode/.local/lib/python3.10/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (1.0.4)\n", - "Requirement already satisfied: h11<0.15,>=0.13 in /home/vscode/.local/lib/python3.10/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.3->pyautogen>=0.2.3) (0.14.0)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: scipy in /home/vscode/.local/lib/python3.10/site-packages (1.12.0)\n", - "Requirement already satisfied: numpy<1.29.0,>=1.22.4 in /home/vscode/.local/lib/python3.10/site-packages (from scipy) (1.26.4)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade 
pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n", - "Defaulting to user installation because normal site-packages is not writeable\n", - "Requirement already satisfied: matplotlib in /home/vscode/.local/lib/python3.10/site-packages (3.8.3)\n", - "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/site-packages (from matplotlib) (24.0)\n", - "Requirement already satisfied: pyparsing>=2.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (3.1.2)\n", - "Requirement already satisfied: contourpy>=1.0.1 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (1.2.0)\n", - "Requirement already satisfied: fonttools>=4.22.0 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (4.50.0)\n", - "Requirement already satisfied: python-dateutil>=2.7 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (2.9.0.post0)\n", - "Requirement already satisfied: cycler>=0.10 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (0.12.1)\n", - "Requirement already satisfied: pillow>=8 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (10.2.0)\n", - "Requirement already satisfied: numpy<2,>=1.21 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (1.26.4)\n", - "Requirement already satisfied: kiwisolver>=1.3.1 in /home/vscode/.local/lib/python3.10/site-packages (from matplotlib) (1.4.5)\n", - "Requirement already satisfied: six>=1.5 in /home/vscode/.local/lib/python3.10/site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.0.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade 
pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n" - ] - } - ], + "outputs": [], "source": [ - "%pip install \"pyautogen>=0.2.3\" docker\n", + "%pip install \"autogen-agentchat~=0.2\" docker\n", "%pip install scipy\n", "%pip install matplotlib" ] diff --git a/notebook/autobuild_agent_library.ipynb b/notebook/autobuild_agent_library.ipynb index bde50355319..88687543603 100644 --- a/notebook/autobuild_agent_library.ipynb +++ b/notebook/autobuild_agent_library.ipynb @@ -24,7 +24,7 @@ "source": [ "## Requirement\n", - "AutoBuild require `pyautogen[autobuild]`, which can be installed by the following command:" + "AutoBuild requires `autogen-agentchat[autobuild]~=0.2`, which can be installed by the following command:" ] }, { @@ -36,7 +36,7 @@ }, "outputs": [], "source": [ - "%pip install pyautogen[autobuild]" + "%pip install autogen-agentchat[autobuild]~=0.2" ] }, { diff --git a/notebook/autobuild_basic.ipynb b/notebook/autobuild_basic.ipynb index d100563ac25..d26633f2bea 100644 --- a/notebook/autobuild_basic.ipynb +++ b/notebook/autobuild_basic.ipynb @@ -26,7 +26,7 @@ "source": [ "## Requirement\n", - "AutoBuild require `pyautogen[autobuild]`, which can be installed by the following command:" + "AutoBuild requires `autogen-agentchat[autobuild]~=0.2`, which can be installed by the following command:" ] }, { @@ -38,7 +38,7 @@ }, "outputs": [], "source": [ - "%pip install pyautogen[autobuild]" + "%pip install autogen-agentchat[autobuild]~=0.2" ] }, { diff --git a/notebook/autogen_uniformed_api_calling.ipynb b/notebook/autogen_uniformed_api_calling.ipynb index 08f747e1722..ef28b99630c 100644 --- a/notebook/autogen_uniformed_api_calling.ipynb +++ b/notebook/autogen_uniformed_api_calling.ipynb @@ -35,7 +35,7 @@ "By default, AutoGen is installed with OpenAI support.\n", " \n", "```bash\n", - "pip install pyautogen[gemini,anthropic,mistral,together]\n", + "pip install autogen-agentchat[gemini,anthropic,mistral,together]~=0.2\n",
"```\n", "\n", "\n", diff --git a/notebook/contributing.md b/notebook/contributing.md index fcafe3c7115..e21ef639267 100644 --- a/notebook/contributing.md +++ b/notebook/contributing.md @@ -36,9 +36,9 @@ You don't need to explain in depth how to install AutoGen. Unless there are spec `````` ````{=mdx} :::info Requirements -Install `pyautogen`: +Install `autogen-agentchat`: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` For more information, please refer to the [installation guide](/docs/installation/). @@ -54,7 +54,7 @@ Or if extras are needed: Some extra dependencies are needed for this notebook, which can be installed via pip: ```bash -pip install pyautogen[retrievechat] flaml[automl] +pip install autogen-agentchat[retrievechat]~=0.2 flaml[automl] ``` For more information, please refer to the [installation guide](/docs/installation/). diff --git a/notebook/gpt_assistant_agent_function_call.ipynb b/notebook/gpt_assistant_agent_function_call.ipynb index 6febb89cc9b..db14b262503 100644 --- a/notebook/gpt_assistant_agent_function_call.ipynb +++ b/notebook/gpt_assistant_agent_function_call.ipynb @@ -22,56 +22,18 @@ }, "source": [ "## Requirements\n", - "AutoGen requires Python 3.8 or newer. For this notebook, please install `pyautogen`:" + "AutoGen requires Python 3.8 or newer. 
For this notebook, please install `autogen-agentchat`:" ] }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { "id": "pWFw6-8lMleD" }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: pyautogen in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (0.2.8)\n", - "Requirement already satisfied: openai>=1.3 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (1.6.1)\n", - "Requirement already satisfied: diskcache in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (5.6.3)\n", - "Requirement already satisfied: termcolor in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.4.0)\n", - "Requirement already satisfied: flaml in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.1.1)\n", - "Requirement already satisfied: python-dotenv in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (1.0.0)\n", - "Requirement already satisfied: tiktoken in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (0.5.2)\n", - "Requirement already satisfied: pydantic<3,>=1.10 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (2.5.3)\n", - "Requirement already satisfied: docker in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pyautogen) (7.0.0)\n", - "Requirement already satisfied: anyio<5,>=3.5.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.2.0)\n", - "Requirement already satisfied: distro<2,>=1.7.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (1.8.0)\n", - "Requirement already satisfied: httpx<1,>=0.23.0 in
/Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (0.26.0)\n", - "Requirement already satisfied: sniffio in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (1.3.0)\n", - "Requirement already satisfied: tqdm>4 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.66.1)\n", - "Requirement already satisfied: typing-extensions<5,>=4.7 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from openai>=1.3->pyautogen) (4.9.0)\n", - "Requirement already satisfied: annotated-types>=0.4.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pydantic<3,>=1.10->pyautogen) (0.6.0)\n", - "Requirement already satisfied: pydantic-core==2.14.6 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from pydantic<3,>=1.10->pyautogen) (2.14.6)\n", - "Requirement already satisfied: packaging>=14.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (23.2)\n", - "Requirement already satisfied: requests>=2.26.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (2.31.0)\n", - "Requirement already satisfied: urllib3>=1.26.0 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from docker->pyautogen) (2.1.0)\n", - "Requirement already satisfied: NumPy>=1.17.0rc1 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from flaml->pyautogen) (1.26.2)\n", - "Requirement already satisfied: regex>=2022.1.18 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from tiktoken->pyautogen) (2023.10.3)\n", - "Requirement already satisfied: idna>=2.8 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from anyio<5,>=3.5.0->openai>=1.3->pyautogen) (3.6)\n", - "Requirement already 
satisfied: certifi in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen) (2023.11.17)\n", - "Requirement already satisfied: httpcore==1.* in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai>=1.3->pyautogen) (1.0.2)\n", - "Requirement already satisfied: h11<0.15,>=0.13 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai>=1.3->pyautogen) (0.14.0)\n", - "Requirement already satisfied: charset-normalizer<4,>=2 in /Users/justintrugman/.pyenv/versions/3.11.7/lib/python3.11/site-packages (from requests>=2.26.0->docker->pyautogen) (3.3.2)\n", - "\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.3.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n", - "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", - "Note: you may need to restart the kernel to use updated packages.\n" - ] - } - ], + "outputs": [], "source": [ - "pip install pyautogen" + "pip install autogen-agentchat~=0.2" ] }, { diff --git a/notebook/oai_chatgpt_gpt4.ipynb b/notebook/oai_chatgpt_gpt4.ipynb index 280b7145e93..1994d146c06 100644 --- a/notebook/oai_chatgpt_gpt4.ipynb +++ b/notebook/oai_chatgpt_gpt4.ipynb @@ -34,7 +34,7 @@ "\n", "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install with the [blendsearch] option:\n", "```bash\n", - "pip install \"pyautogen[blendsearch]\"\n", + "pip install \"pyautogen[blendsearch]<0.2\"\n", "```" ] }, diff --git a/notebook/oai_completion.ipynb b/notebook/oai_completion.ipynb index ac1b3f9c95f..451a161bbad 100644 --- a/notebook/oai_completion.ipynb +++ b/notebook/oai_completion.ipynb @@ -32,7 +32,7 @@ "\n", "AutoGen requires `Python>=3.8`. To run this notebook example, please install with the [blendsearch] option:\n", "```bash\n", - "pip install pyautogen[blendsearch]\n", + "pip install \"pyautogen[blendsearch]<0.2\"\n", "```" ] }, diff --git a/samples/apps/auto-anny/requirements.txt b/samples/apps/auto-anny/requirements.txt index 13a0ba19c64..e1b27e81bc2 100644 --- a/samples/apps/auto-anny/requirements.txt +++ b/samples/apps/auto-anny/requirements.txt @@ -1,2 +1,2 @@ discord.py -pyautogen +autogen-agentchat~=0.2 diff --git a/samples/apps/promptflow-autogen/requirements.txt b/samples/apps/promptflow-autogen/requirements.txt index 6fe9807785f..c4b6e9681fd 100644 --- a/samples/apps/promptflow-autogen/requirements.txt +++ b/samples/apps/promptflow-autogen/requirements.txt @@ -1,7 +1,5 @@ promptflow==1.8.0 -pyautogen==0.2.23 -pyautogen[graph] -pyautogen[redis] +autogen-agentchat[graph,redis]~=0.2 redis semantic-kernel beautifulsoup4 diff --git a/samples/tools/autogenbench/README.md b/samples/tools/autogenbench/README.md index 9c747c9896d..85ee3ace9d0 100644 --- a/samples/tools/autogenbench/README.md +++ b/samples/tools/autogenbench/README.md @@ -162,7 +162,7 @@ This folder holds the results for the ``two_agent_stocks`` task of the ``default Within each folder, you will find the following files: -- *timestamp.txt*: records the date and time of the run, along with the version of the pyautogen library installed +- *timestamp.txt*: records the date and time of the run, along with the version of the autogen-agentchat library installed - *console_log.txt*: all console output 
produced by Docker when running AutoGen. Read this like you would a regular console. - *[agent]_messages.json*: for each Agent, a log of their messages dictionaries - *./coding*: A directory containing all code written by AutoGen, and all artifacts produced by that code. diff --git a/samples/tools/autogenbench/autogenbench/res/Dockerfile b/samples/tools/autogenbench/autogenbench/res/Dockerfile index 5c3f5f40968..0382a00fbb1 100644 --- a/samples/tools/autogenbench/autogenbench/res/Dockerfile +++ b/samples/tools/autogenbench/autogenbench/res/Dockerfile @@ -9,8 +9,8 @@ RUN pip install --upgrade pip RUN ln -snf /usr/share/zoneinfo/US/Pacific /etc/localtime && echo "US/Pacific" > /etc/timezone # Pre-load autogen dependencies, but not autogen itself since we'll often want to install the latest from source -RUN pip install pyautogen[teachable,lmm,graphs,websurfer] -RUN pip uninstall --yes pyautogen +RUN pip install autogen-agentchat[teachable,lmm,graphs,websurfer]~=0.2 +RUN pip uninstall --yes autogen-agentchat # Pre-load popular packages as per https://learnpython.com/blog/most-popular-python-packages/ RUN pip install numpy pandas matplotlib seaborn scikit-learn requests urllib3 nltk pillow pytest diff --git a/samples/tools/autogenbench/autogenbench/template/testbed_utils.py b/samples/tools/autogenbench/autogenbench/template/testbed_utils.py index bce42a625b2..37b1f69979b 100644 --- a/samples/tools/autogenbench/autogenbench/template/testbed_utils.py +++ b/samples/tools/autogenbench/autogenbench/template/testbed_utils.py @@ -62,7 +62,7 @@ def init(): # Print some information about the run with open("timestamp.txt", "wt") as f: f.write("Timestamp: " + datetime.now().isoformat() + "\n") - f.write("pyautogen version: " + str(autogen.__version__) + "\n") + f.write("autogen-agentchat version: " + str(autogen.__version__) + "\n") # Start logging if AUTOGEN_VERSION < packaging.version.parse("0.2.0b1"): diff --git a/samples/tools/autogenbench/pyproject.toml 
b/samples/tools/autogenbench/pyproject.toml index ef1a2fe80df..7c730c7c113 100644 --- a/samples/tools/autogenbench/pyproject.toml +++ b/samples/tools/autogenbench/pyproject.toml @@ -18,7 +18,7 @@ classifiers = [ ] dependencies = [ - "pyautogen", + "autogen-agentchat~=0.2", "docker", "huggingface_hub", "tabulate", diff --git a/setup.py b/setup.py index 65f5cbe8f02..fe55a4a6c2e 100644 --- a/setup.py +++ b/setup.py @@ -110,10 +110,10 @@ } setuptools.setup( - name="pyautogen", + name="autogen-agentchat", version=__version__, author="AutoGen", - author_email="autogen-contact@service.microsoft.com", + author_email="autogen@microsoft.com", description="Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework", long_description=long_description, long_description_content_type="text/markdown", diff --git a/website/blog/2023-10-18-RetrieveChat/index.mdx b/website/blog/2023-10-18-RetrieveChat/index.mdx index d5c78148e44..4bad582eb5d 100644 --- a/website/blog/2023-10-18-RetrieveChat/index.mdx +++ b/website/blog/2023-10-18-RetrieveChat/index.mdx @@ -52,9 +52,9 @@ The conversation terminates if no more documents are available for the context. ## Basic Usage of RAG Agents 0. Install dependencies -Please install pyautogen with the [retrievechat] option before using RAG agents. +Please install autogen-agentchat with the [retrievechat] option before using RAG agents. ```bash -pip install "pyautogen[retrievechat]" +pip install "autogen-agentchat[retrievechat]~=0.2" ``` *You'll need to install `chromadb<=0.5.0` if you see issue like [#3551](https://github.com/microsoft/autogen/issues/3551).* diff --git a/website/blog/2023-10-26-TeachableAgent/index.mdx b/website/blog/2023-10-26-TeachableAgent/index.mdx index ca399248954..f097acc7372 100644 --- a/website/blog/2023-10-26-TeachableAgent/index.mdx +++ b/website/blog/2023-10-26-TeachableAgent/index.mdx @@ -36,10 +36,10 @@ AutoGen contains four code examples that use `Teachability`. 1. 
Install dependencies -Please install pyautogen with the [teachable] option before using `Teachability`. +Please install autogen-agentchat~=0.2 with the [teachable] option before using `Teachability`. ```bash -pip install "pyautogen[teachable]" +pip install "autogen-agentchat[teachable]~=0.2" ``` 2. Import agents diff --git a/website/blog/2023-11-06-LMM-Agent/index.mdx b/website/blog/2023-11-06-LMM-Agent/index.mdx index 452079f1c45..0ab92c4dbb9 100644 --- a/website/blog/2023-11-06-LMM-Agent/index.mdx +++ b/website/blog/2023-11-06-LMM-Agent/index.mdx @@ -25,7 +25,7 @@ GPT-4V represents the forefront in image comprehension, while LLaVA is an effici Incorporate the `lmm` feature during AutoGen installation: ```bash -pip install "pyautogen[lmm]" +pip install "autogen-agentchat[lmm]~=0.2" ``` Subsequently, import the **Multimodal Conversable Agent** or **LLaVA Agent** from AutoGen: diff --git a/website/blog/2023-11-13-OAI-assistants/index.mdx b/website/blog/2023-11-13-OAI-assistants/index.mdx index 07216a25969..2fc9bee359b 100644 --- a/website/blog/2023-11-13-OAI-assistants/index.mdx +++ b/website/blog/2023-11-13-OAI-assistants/index.mdx @@ -30,7 +30,7 @@ This integration shows great potential and synergy, and we plan to continue enha ## Installation ```bash -pip install pyautogen==0.2.0b5 +pip install autogen-agentchat~=0.2 ``` ## Basic Example diff --git a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx index be71662ab6e..a6d0025699d 100644 --- a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx +++ b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx @@ -29,7 +29,7 @@ up an endpoint server automatically without any user participation. 
## Installation - AutoGen: ```bash -pip install pyautogen[autobuild] +pip install autogen-agentchat[autobuild]~=0.2 ``` - (Optional: if you want to use open-source LLMs) vLLM and FastChat ```bash diff --git a/website/blog/2024-03-03-AutoGen-Update/index.mdx b/website/blog/2024-03-03-AutoGen-Update/index.mdx index da036094c6e..7458ec74e05 100644 --- a/website/blog/2024-03-03-AutoGen-Update/index.mdx +++ b/website/blog/2024-03-03-AutoGen-Update/index.mdx @@ -148,7 +148,7 @@ These tools have been used for improving the AutoGen library as well as applicat We are making rapid progress in further improving the interface to make it even easier to build agent applications. For example: - [AutoBuild](/blog/2023/11/26/Agent-AutoBuild). AutoBuild is an ongoing area of research to automatically create or select a group of agents for a given task and objective. If successful, it will greatly reduce the effort from users or developers when using the multi-agent technology. It also paves the way for agentic decomposition to handle complex tasks. It is available as an experimental feature and demonstrated in two modes: free-form [creation](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) and [selection](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb) from a library. -- [AutoGen Studio](/blog/2023/12/01/AutoGenStudio). AutoGen Studio is a no-code UI for fast experimentation with the multi-agent conversations. It lowers the barrier of entrance to the AutoGen technology. Models, agents, and workflows can all be configured without writing code. And chatting with multiple agents in a playground is immediately available after the configuration. Although only a subset of `pyautogen` features are available in this sample app, it demonstrates a promising experience. It has generated tremendous excitement in the community. +- [AutoGen Studio](/blog/2023/12/01/AutoGenStudio). 
AutoGen Studio is a no-code UI for fast experimentation with the multi-agent conversations. It lowers the barrier of entrance to the AutoGen technology. Models, agents, and workflows can all be configured without writing code. And chatting with multiple agents in a playground is immediately available after the configuration. Although only a subset of `autogen-agentchat` features are available in this sample app, it demonstrates a promising experience. It has generated tremendous excitement in the community. - Conversation Programming+. The [AutoGen paper](https://arxiv.org/abs/2308.08155) introduced a key concept of _Conversation Programming_, which can be used to program diverse conversation patterns such as 1-1 chat, group chat, hierarchical chat, nested chat etc. While we offered dynamic group chat as an example of high-level orchestration, it made other patterns relatively less discoverable. Therefore, we have added more convenient conversation programming features which enables easier definition of other types of complex workflow, such as [finite state machine based group chat](/blog/2024/02/11/FSM-GroupChat), [sequential chats](/docs/notebooks/agentchats_sequential_chats), and [nested chats](/docs/notebooks/agentchat_nestedchat). Many users have found them useful in implementing specific patterns, which have been always possible but more obvious with the added features. I will write another blog post for a deep dive. ### Learning/Optimization/Teaching diff --git a/website/blog/2024-06-24-AltModels-Classes/index.mdx b/website/blog/2024-06-24-AltModels-Classes/index.mdx index 1f01fb9402a..7001e74e83f 100644 --- a/website/blog/2024-06-24-AltModels-Classes/index.mdx +++ b/website/blog/2024-06-24-AltModels-Classes/index.mdx @@ -72,10 +72,10 @@ Now it's time to try them out. Install the appropriate client based on the model you wish to use. 
```sh -pip install pyautogen["mistral"] # for Mistral AI client -pip install pyautogen["anthropic"] # for Anthropic client -pip install pyautogen["together"] # for Together.AI client -pip install pyautogen["groq"] # for Groq client +pip install autogen-agentchat["mistral"]~=0.2 # for Mistral AI client +pip install autogen-agentchat["anthropic"]~=0.2 # for Anthropic client +pip install autogen-agentchat["together"]~=0.2 # for Together.AI client +pip install autogen-agentchat["groq"]~=0.2 # for Groq client ``` ### Configuration Setup diff --git a/website/docs/FAQ.mdx b/website/docs/FAQ.mdx index a367a9b2063..14be83c7bab 100644 --- a/website/docs/FAQ.mdx +++ b/website/docs/FAQ.mdx @@ -4,12 +4,12 @@ import TOCInline from "@theme/TOCInline"; -## Install the correct package - `pyautogen` +## Install the correct package - `autogen-agentchat` -The name of Autogen package at PyPI is `pyautogen`: +The name of Autogen package at PyPI is `autogen-agentchat`: ``` -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` Typical errors that you might face when using the wrong package are `AttributeError: module 'autogen' has no attribute 'Agent'`, `AttributeError: module 'autogen' has no attribute 'config_list_from_json'` etc. diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx index 3e162a09832..3d8639d11fb 100644 --- a/website/docs/Getting-Started.mdx +++ b/website/docs/Getting-Started.mdx @@ -35,7 +35,7 @@ Microsoft, Penn State University, and University of Washington. ### Quickstart ```sh -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` diff --git a/website/docs/ecosystem/portkey.md b/website/docs/ecosystem/portkey.md index 4825cf78d9a..a9f67d3871f 100644 --- a/website/docs/ecosystem/portkey.md +++ b/website/docs/ecosystem/portkey.md @@ -13,7 +13,7 @@ Portkey adds 4 core production capabilities to any AutoGen agent: 1. **Install Required Packages:** 2. 
```bash - pip install -qU pyautogen portkey-ai + pip install -qU autogen-agentchat~=0.2 portkey-ai ``` **Configure AutoGen with Portkey:** diff --git a/website/docs/installation/Installation.mdx b/website/docs/installation/Installation.mdx index af3ed662013..8c41f09cfde 100644 --- a/website/docs/installation/Installation.mdx +++ b/website/docs/installation/Installation.mdx @@ -13,8 +13,8 @@ When installing AutoGen locally, we recommend using a virtual environment for th Create and activate: ```bash - python3 -m venv pyautogen - source pyautogen/bin/activate + python3 -m venv .venv + source .venv/bin/activate ``` To deactivate later, run: @@ -32,8 +32,8 @@ When installing AutoGen locally, we recommend using a virtual environment for th Create and activate: ```bash - conda create -n pyautogen python=3.10 - conda activate pyautogen + conda create -n autogen python=3.10 + conda activate autogen ``` To deactivate later, run: @@ -52,7 +52,7 @@ When installing AutoGen locally, we recommend using a virtual environment for th poetry init poetry shell - poetry add pyautogen + poetry add autogen-agentchat~=0.2 ``` To deactivate later, run: @@ -69,15 +69,9 @@ When installing AutoGen locally, we recommend using a virtual environment for th AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed from pip: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` -:::info - -`pyautogen<0.2` required `openai<1`. Starting from pyautogen v0.2, `openai>=1` is required. - -::: - ## Install Docker for Code Execution We recommend using Docker for code execution. 
diff --git a/website/docs/installation/Optional-Dependencies.md b/website/docs/installation/Optional-Dependencies.md index 7d17ce50e37..3f8164a667e 100644 --- a/website/docs/installation/Optional-Dependencies.md +++ b/website/docs/installation/Optional-Dependencies.md @@ -6,7 +6,7 @@ To use LLM caching with Redis, you need to install the Python package with the option `redis`: ```bash -pip install "pyautogen[redis]" +pip install "autogen-agentchat[redis]~=0.2" ``` See [LLM Caching](/docs/topics/llm-caching) for details. @@ -17,7 +17,7 @@ To use the IPython code executor, you need to install the `jupyter-client` and `ipykernel` packages: ```bash -pip install "pyautogen[ipython]" +pip install "autogen-agentchat[ipython]~=0.2" ``` To use the IPython code executor: @@ -44,21 +44,21 @@ Example notebooks: ## retrievechat -`pyautogen` supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it with ChromaDB. +AutoGen 0.2 supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it with ChromaDB. ```bash -pip install "pyautogen[retrievechat]" +pip install "autogen-agentchat[retrievechat]~=0.2" ``` *You'll need to install `chromadb<=0.5.0` if you see issue like [#3551](https://github.com/microsoft/autogen/issues/3551).* -Alternatively `pyautogen` also supports PGVector and Qdrant which can be installed in place of ChromaDB, or alongside it. +Alternatively, AutoGen 0.2 also supports PGVector and Qdrant, which can be installed in place of ChromaDB, or alongside it. ```bash -pip install "pyautogen[retrievechat-pgvector]" +pip install "autogen-agentchat[retrievechat-pgvector]~=0.2" ``` ```bash -pip install "pyautogen[retrievechat-qdrant]" +pip install "autogen-agentchat[retrievechat-qdrant]~=0.2" ``` RetrieveChat can handle various types of documents. 
By default, it can process @@ -83,7 +83,7 @@ Example notebooks: To use Teachability, please install AutoGen with the [teachable] option. ```bash -pip install "pyautogen[teachable]" +pip install "autogen-agentchat[teachable]~=0.2" ``` Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb) @@ -93,7 +93,7 @@ Example notebook: [Chatting with a teachable agent](https://github.com/microsoft We offered Multimodal Conversable Agent and LLaVA Agent. Please install with the [lmm] option to use it. ```bash -pip install "pyautogen[lmm]" +pip install "autogen-agentchat[lmm]~=0.2" ``` Example notebooks: @@ -117,7 +117,7 @@ Example notebooks: To use a graph in `GroupChat`, particularly for graph visualization, please install AutoGen with the [graph] option. ```bash -pip install "pyautogen[graph]" +pip install "autogen-agentchat[graph]~=0.2" ``` Example notebook: [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) @@ -127,5 +127,5 @@ Example notebook: [Finite State Machine graphs to set speaker transition constra AutoGen includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. To enable this functionality, please install AutoGen with the `[long-context]` option: ```bash -pip install "pyautogen[long-context]" +pip install "autogen-agentchat[long-context]~=0.2" ``` diff --git a/website/docs/topics/code-execution/custom-executor.ipynb b/website/docs/topics/code-execution/custom-executor.ipynb index c6ee4c16018..41d3b59b8fd 100644 --- a/website/docs/topics/code-execution/custom-executor.ipynb +++ b/website/docs/topics/code-execution/custom-executor.ipynb @@ -18,7 +18,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip -qqq install pyautogen matplotlib yfinance" + "! 
pip -qqq install autogen-agentchat~=0.2 matplotlib yfinance" ] }, { diff --git a/website/docs/topics/code-execution/jupyter-code-executor.ipynb b/website/docs/topics/code-execution/jupyter-code-executor.ipynb index 09f35f1fdc9..9ee72bccc0a 100644 --- a/website/docs/topics/code-execution/jupyter-code-executor.ipynb +++ b/website/docs/topics/code-execution/jupyter-code-executor.ipynb @@ -15,7 +15,7 @@ "In order to use Jupyter based code execution some extra dependencies are required. These can be installed with the extra `jupyter-executor`:\n", "\n", "```bash\n", - "pip install 'pyautogen[jupyter-executor]'\n", + "pip install 'autogen-agentchat[jupyter-executor]~=0.2'\n", "```\n", "\n", "## Jupyter Server\n", diff --git a/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md b/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md index e251786f555..965bbfbd010 100644 --- a/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md +++ b/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md @@ -5,10 +5,10 @@ Text compression is crucial for optimizing interactions with LLMs, especially wh This guide introduces LLMLingua's integration with AutoGen, demonstrating how to use this tool to compress text, thereby optimizing the usage of LLMs for various applications. :::info Requirements -Install `pyautogen[long-context]` and `PyMuPDF`: +Install `autogen-agentchat[long-context]~=0.2` and `PyMuPDF`: ```bash -pip install "pyautogen[long-context]" PyMuPDF +pip install "autogen-agentchat[long-context]~=0.2" PyMuPDF ``` For more information, please refer to the [installation guide](/docs/installation/). 
diff --git a/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md b/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md index 52fea15d01e..fc854f8d834 100644 --- a/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md +++ b/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md @@ -13,10 +13,10 @@ Why do we need to handle long contexts? The problem arises from several constrai The `TransformMessages` capability is designed to modify incoming messages before they are processed by the LLM agent. This can include limiting the number of messages, truncating messages to meet token limits, and more. :::info Requirements -Install `pyautogen`: +Install `autogen-agentchat`: ```bash -pip install pyautogen +pip install autogen-agentchat~=0.2 ``` For more information, please refer to the [installation guide](/docs/installation/). diff --git a/website/docs/topics/non-openai-models/cloud-anthropic.ipynb b/website/docs/topics/non-openai-models/cloud-anthropic.ipynb index a6c87b6a5ca..ba4c831232c 100644 --- a/website/docs/topics/non-openai-models/cloud-anthropic.ipynb +++ b/website/docs/topics/non-openai-models/cloud-anthropic.ipynb @@ -21,7 +21,7 @@ "Additionally, this client class provides support for function/tool calling and will track token usage and cost correctly as per Anthropic's API costs (as of June 2024).\n", "\n", "## Requirements\n", - "To use Anthropic Claude with AutoGen, first you need to install the `pyautogen[anthropic]` package.\n", + "To use Anthropic Claude with AutoGen, first you need to install the `autogen-agentchat[anthropic]` package.\n", "\n", "To try out the function call feature of Claude model, you need to install `anthropic>=0.23.1`.\n" ] @@ -32,7 +32,7 @@ "metadata": {}, "outputs": [], "source": [ - "!pip install pyautogen[\"anthropic\"]" + "!pip install autogen-agentchat[\"anthropic\"]~=0.2" ] }, { diff --git 
a/website/docs/topics/non-openai-models/cloud-bedrock.ipynb b/website/docs/topics/non-openai-models/cloud-bedrock.ipynb index 71c1e2e7ffe..422598dd2fe 100644 --- a/website/docs/topics/non-openai-models/cloud-bedrock.ipynb +++ b/website/docs/topics/non-openai-models/cloud-bedrock.ipynb @@ -25,7 +25,7 @@ "It does not, yet, support image generation ([contribute](https://microsoft.github.io/autogen/docs/contributor-guide/contributing/)).\n", "\n", "## Requirements\n", - "To use Amazon Bedrock with AutoGen, first you need to install the `pyautogen[bedrock]` package.\n", + "To use Amazon Bedrock with AutoGen, first you need to install the `autogen-agentchat[bedrock]` package.\n", "\n", "## Pricing\n", "\n", @@ -48,7 +48,7 @@ "outputs": [], "source": [ "# If you need to install AutoGen with Amazon Bedrock\n", - "!pip install pyautogen[\"bedrock\"]" + "!pip install autogen-agentchat[\"bedrock\"]~=0.2" ] }, { diff --git a/website/docs/topics/non-openai-models/cloud-cerebras.ipynb b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb index a8e1d3940f4..e0b9bbaf2d5 100644 --- a/website/docs/topics/non-openai-models/cloud-cerebras.ipynb +++ b/website/docs/topics/non-openai-models/cloud-cerebras.ipynb @@ -18,7 +18,7 @@ "metadata": {}, "source": [ "# Requirements\n", - "To use Cerebras with AutoGen, install the `pyautogen[cerebras]` package." + "To use Cerebras with AutoGen, install the `autogen-agentchat[cerebras]` package." 
] }, { @@ -27,7 +27,7 @@ "metadata": {}, "outputs": [], "source": [ - "!pip install pyautogen[\"cerebras\"]" + "!pip install autogen-agentchat[\"cerebras\"]~=0.2" ] }, { diff --git a/website/docs/topics/non-openai-models/cloud-cohere.ipynb b/website/docs/topics/non-openai-models/cloud-cohere.ipynb index 73dcc54a75e..defddf983c3 100644 --- a/website/docs/topics/non-openai-models/cloud-cohere.ipynb +++ b/website/docs/topics/non-openai-models/cloud-cohere.ipynb @@ -25,10 +25,10 @@ "\n", "## Getting started\n", "\n", - "First you need to install the `pyautogen` package to use AutoGen with the Cohere API library.\n", + "First you need to install the `autogen-agentchat~=0.2` package to use AutoGen with the Cohere API library.\n", "\n", "``` bash\n", - "pip install pyautogen[cohere]\n", + "pip install autogen-agentchat[cohere]~=0.2\n", "```" ] }, diff --git a/website/docs/topics/non-openai-models/cloud-gemini.ipynb b/website/docs/topics/non-openai-models/cloud-gemini.ipynb index a227582c592..0a36dd62cf9 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini.ipynb @@ -11,7 +11,7 @@ "Install AutoGen with Gemini features:\n", "\n", "```bash\n", - "pip install pyautogen[gemini]\n", + "pip install autogen-agentchat[gemini]~=0.2\n", "```\n", "\n", "## Dependencies of This Notebook\n", @@ -19,7 +19,7 @@ "In this notebook, we will explore how to use Gemini in AutoGen alongside other tools. 
Install the necessary dependencies with the following command:\n", "\n", "```bash\n", - "pip install pyautogen[gemini,retrievechat,lmm]\n", + "pip install autogen-agentchat[gemini,retrievechat,lmm]~=0.2\n", "```\n", "\n", "## Features\n", diff --git a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb index 637d340dc37..3456a803f48 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb @@ -16,7 +16,7 @@ "\n", "Install AutoGen with Gemini features:\n", "```bash\n", - "pip install pyautogen[gemini]\n", + "pip install autogen-agentchat[gemini]~=0.2\n", "```\n", "\n", "### Install other Dependencies of this Notebook\n", diff --git a/website/docs/topics/non-openai-models/cloud-mistralai.ipynb b/website/docs/topics/non-openai-models/cloud-mistralai.ipynb index 1228f96db4e..9babddf601f 100644 --- a/website/docs/topics/non-openai-models/cloud-mistralai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-mistralai.ipynb @@ -25,10 +25,10 @@ "\n", "## Getting started\n", "\n", - "First you need to install the `pyautogen` package to use AutoGen with the Mistral API library.\n", + "First you need to install the `autogen-agentchat~=0.2` package to use AutoGen with the Mistral API library.\n", "\n", "``` bash\n", - "pip install pyautogen[mistral]\n", + "pip install autogen-agentchat[mistral]~=0.2\n", "```" ] }, diff --git a/website/docs/topics/non-openai-models/cloud-togetherai.ipynb b/website/docs/topics/non-openai-models/cloud-togetherai.ipynb index eccc372ce2e..6ec9f52bd7f 100644 --- a/website/docs/topics/non-openai-models/cloud-togetherai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-togetherai.ipynb @@ -23,10 +23,10 @@ "\n", "## Getting started\n", "\n", - "First, you need to install the `pyautogen` package to use AutoGen with the Together.AI API library.\n", + "First, you need to 
install the `autogen-agentchat~=0.2` package to use AutoGen with the Together.AI API library.\n", "\n", "``` bash\n", - "pip install pyautogen[together]\n", + "pip install autogen-agentchat[together]~=0.2\n", "```" ] }, diff --git a/website/docs/topics/prompting-and-reasoning/react.ipynb b/website/docs/topics/prompting-and-reasoning/react.ipynb index 08f30913348..7663ebc156f 100644 --- a/website/docs/topics/prompting-and-reasoning/react.ipynb +++ b/website/docs/topics/prompting-and-reasoning/react.ipynb @@ -26,7 +26,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install \"pyautogen>=0.2.18\" \"tavily-python\"" + "! pip install \"autogen-agentchat~=0.2\" \"tavily-python\"" ] }, { diff --git a/website/docs/topics/prompting-and-reasoning/reflection.ipynb b/website/docs/topics/prompting-and-reasoning/reflection.ipynb index 60438904472..bb6fa50b289 100644 --- a/website/docs/topics/prompting-and-reasoning/reflection.ipynb +++ b/website/docs/topics/prompting-and-reasoning/reflection.ipynb @@ -18,7 +18,7 @@ "id": "5cff1938", "metadata": {}, "source": [ - "First make sure the `pyautogen` package is installed." + "First make sure the `autogen-agentchat` package is installed." ] }, { @@ -28,7 +28,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install \"pyautogen>=0.2.18\"" + "! pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/website/docs/topics/task_decomposition.ipynb b/website/docs/topics/task_decomposition.ipynb index e4c24c9004e..ffb18ebdee0 100644 --- a/website/docs/topics/task_decomposition.ipynb +++ b/website/docs/topics/task_decomposition.ipynb @@ -16,7 +16,7 @@ "id": "a6c436c9", "metadata": {}, "source": [ - "First make sure the `pyautogen` package is installed." + "First make sure the `autogen-agentchat` package is installed." ] }, { @@ -26,7 +26,7 @@ "metadata": {}, "outputs": [], "source": [ - "! pip install \"pyautogen>=0.2.18\"" + "! 
pip install \"autogen-agentchat~=0.2\"" ] }, { diff --git a/website/docs/tutorial/introduction.ipynb b/website/docs/tutorial/introduction.ipynb index 88df66b7270..fd5a362d035 100644 --- a/website/docs/tutorial/introduction.ipynb +++ b/website/docs/tutorial/introduction.ipynb @@ -38,7 +38,7 @@ "source": [ "## Installation\n", "\n", - "The simplest way to install AutoGen is from pip: `pip install pyautogen`. Find more options in [Installation](/docs/installation/)." + "The simplest way to install AutoGen is from pip: `pip install autogen-agentchat~=0.2`. Find more options in [Installation](/docs/installation/)." ] }, {