@@ -90,10 +90,20 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
the inner messages as they are created, and the :class:`~autogen_agentchat.base.Response`
object as the last item before closing the generator.

+ The :meth:`BaseChatAgent.run` method returns a :class:`~autogen_agentchat.base.TaskResult`
+ containing the messages produced by the agent. In the list of messages,
+ :attr:`~autogen_agentchat.base.TaskResult.messages`,
+ the last message is the final response message.
+
+ The :meth:`BaseChatAgent.run_stream` method creates an async generator that produces
+ the inner messages as they are created, and the :class:`~autogen_agentchat.base.TaskResult`
+ object as the last item before closing the generator.
+
.. attention::

The caller must only pass the new messages to the agent on each call
- to the :meth:`on_messages` or :meth:`on_messages_stream` method.
+ to the :meth:`on_messages`, :meth:`on_messages_stream`, :meth:`BaseChatAgent.run`,
+ or :meth:`BaseChatAgent.run_stream` methods.
The agent maintains its state between calls to these methods.
Do not pass the entire conversation history to the agent on each call.
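
A minimal sketch of what this means in practice: each call passes only the new
task, and the agent carries the earlier exchange in its own model context (the
model name below is an assumption for illustration).

.. code-block:: python

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)

    # First call: pass only the new task, not any prior history.
    await agent.run(task="My favorite color is blue.")

    # Second call: again only the new task. The agent still remembers the
    # first exchange because it keeps its state between calls.
    result = await agent.run(task="What is my favorite color?")
    print(result.messages[-1].content)  # type: ignore


asyncio.run(main())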

@@ -215,10 +225,8 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
.. code-block:: python

import asyncio
- from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
- from autogen_agentchat.messages import TextMessage


async def main() -> None:
@@ -228,10 +236,8 @@ async def main() -> None:
)
agent = AssistantAgent(name="assistant", model_client=model_client)

- response = await agent.on_messages(
-     [TextMessage(content="What is the capital of France?", source="user")], CancellationToken()
- )
- print(response)
+ result = await agent.run(task="Name two cities in North America.")
+ print(result)


asyncio.run(main())
@@ -246,8 +252,6 @@ async def main() -> None:
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
- from autogen_agentchat.messages import TextMessage
- from autogen_core import CancellationToken


async def main() -> None:
@@ -261,9 +265,7 @@ async def main() -> None:
model_client_stream=True,
)

- stream = agent.on_messages_stream(
-     [TextMessage(content="Name two cities in North America.", source="user")], CancellationToken()
- )
+ stream = agent.run_stream(task="Name two cities in North America.")
async for message in stream:
print(message)

@@ -272,27 +274,23 @@ async def main() -> None:

.. code-block:: text

- source='assistant' models_usage=None content='Two' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' cities' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' North' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' America' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' are' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' New' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' York' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' City' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' the' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' United' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' States' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' and' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' Toronto' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' in' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' Canada' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content='.' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content=' TERMIN' type='ModelClientStreamingChunkEvent'
- source='assistant' models_usage=None content='ATE' type='ModelClientStreamingChunkEvent'
- Response(chat_message=TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content='Two cities in North America are New York City in the United States and Toronto in Canada. TERMINATE', type='TextMessage'), inner_messages=[])
+ source='user' models_usage=None metadata={} content='Name two cities in North America.' type='TextMessage'
+ source='assistant' models_usage=None metadata={} content='Two' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' cities' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' in' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' North' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' America' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' are' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' New' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' York' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' City' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' and' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' Toronto' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content='.' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content=' TERMIN' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=None metadata={} content='ATE' type='ModelClientStreamingChunkEvent'
+ source='assistant' models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0) metadata={} content='Two cities in North America are New York City and Toronto. TERMINATE' type='TextMessage'
+ messages=[TextMessage(source='user', models_usage=None, metadata={}, content='Name two cities in North America.', type='TextMessage'), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), metadata={}, content='Two cities in North America are New York City and Toronto. TERMINATE', type='TextMessage')] stop_reason=None


**Example 3: agent with tools**
@@ -312,9 +310,7 @@ async def main() -> None:
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
- from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
- from autogen_core import CancellationToken


async def get_current_time() -> str:
@@ -327,12 +323,7 @@ async def main() -> None:
# api_key = "your_openai_api_key"
)
agent = AssistantAgent(name="assistant", model_client=model_client, tools=[get_current_time])

- await Console(
-     agent.on_messages_stream(
-         [TextMessage(content="What is the current time?", source="user")], CancellationToken()
-     )
- )
+ await Console(agent.run_stream(task="What is the current time?"))


asyncio.run(main())
@@ -390,9 +381,7 @@ async def main() -> None:
from typing import Literal

from autogen_agentchat.agents import AssistantAgent
- from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
- from autogen_core import CancellationToken
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel
@@ -430,7 +419,7 @@ def sentiment_analysis(text: str) -> str:


async def main() -> None:
- stream = agent.on_messages_stream([TextMessage(content="I am happy today!", source="user")], CancellationToken())
+ stream = agent.run_stream(task="I am happy today!")
await Console(stream)


@@ -458,8 +447,6 @@ async def main() -> None:
import asyncio

from autogen_agentchat.agents import AssistantAgent
- from autogen_agentchat.messages import TextMessage
- from autogen_core import CancellationToken
from autogen_core.model_context import BufferedChatCompletionContext
from autogen_ext.models.openai import OpenAIChatCompletionClient

@@ -482,20 +469,14 @@ async def main() -> None:
system_message="You are a helpful assistant.",
)

- response = await agent.on_messages(
-     [TextMessage(content="Name two cities in North America.", source="user")], CancellationToken()
- )
- print(response.chat_message.content)  # type: ignore
+ result = await agent.run(task="Name two cities in North America.")
+ print(result.messages[-1].content)  # type: ignore

- response = await agent.on_messages(
-     [TextMessage(content="My favorite color is blue.", source="user")], CancellationToken()
- )
- print(response.chat_message.content)  # type: ignore
+ result = await agent.run(task="My favorite color is blue.")
+ print(result.messages[-1].content)  # type: ignore

- response = await agent.on_messages(
-     [TextMessage(content="Did I ask you any question?", source="user")], CancellationToken()
- )
- print(response.chat_message.content)  # type: ignore
+ result = await agent.run(task="Did I ask you any question?")
+ print(result.messages[-1].content)  # type: ignore


asyncio.run(main())
@@ -518,8 +499,6 @@ async def main() -> None:
import asyncio

from autogen_agentchat.agents import AssistantAgent
- from autogen_agentchat.messages import TextMessage
- from autogen_core import CancellationToken
from autogen_core.memory import ListMemory, MemoryContent
from autogen_ext.models.openai import OpenAIChatCompletionClient

@@ -544,10 +523,8 @@ async def main() -> None:
system_message="You are a helpful assistant.",
)

- response = await agent.on_messages(
-     [TextMessage(content="One idea for a dinner.", source="user")], CancellationToken()
- )
- print(response.chat_message.content)  # type: ignore
+ result = await agent.run(task="What is a good dinner idea?")
+ print(result.messages[-1].content)  # type: ignore


asyncio.run(main())
@@ -573,10 +550,8 @@ async def main() -> None:
.. code-block:: python

import asyncio
- from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
- from autogen_agentchat.messages import TextMessage


async def main() -> None:
Expand All @@ -587,10 +562,8 @@ async def main() -> None:
# The system message is not supported by the o1 series model.
agent = AssistantAgent(name="assistant", model_client=model_client, system_message=None)

- response = await agent.on_messages(
-     [TextMessage(content="What is the capital of France?", source="user")], CancellationToken()
- )
- print(response)
+ result = await agent.run(task="What is the capital of France?")
+ print(result.messages[-1].content)  # type: ignore


asyncio.run(main())
40 changes: 7 additions & 33 deletions python/packages/autogen-core/docs/src/index.md
@@ -46,47 +46,21 @@ A framework for building AI agents and applications
::::{grid}
:gutter: 2

- :::{grid-item-card}
- :shadow: none
- :margin: 2 0 0 0
- :columns: 12 12 6 6
-
- <div class="sd-card-title sd-font-weight-bold docutils">
-
- {fas}`book;pst-color-primary`
- Magentic-One CLI [![PyPi magentic-one-cli](https://img.shields.io/badge/PyPi-magentic--one--cli-blue?logo=pypi)](https://pypi.org/project/magentic-one-cli/)
- </div>
- A console-based multi-agent assistant for web and file-based tasks.
- Built on AgentChat.
-
- ```bash
- pip install -U magentic-one-cli
- m1 "Find flights from Seattle to Paris and format the result in a table"
- ```
-
- +++
-
- ```{button-ref} user-guide/agentchat-user-guide/magentic-one
- :color: secondary
-
- Get Started
- ```
-
- :::
-
:::{grid-item-card} {fas}`palette;pst-color-primary` Studio [![PyPi autogenstudio](https://img.shields.io/badge/PyPi-autogenstudio-blue?logo=pypi)](https://pypi.org/project/autogenstudio/)
:shadow: none
:margin: 2 0 0 0
- :columns: 12 12 6 6
+ :columns: 12 12 12 12

- An app for prototyping and managing agents without writing code.
+ A web-based UI for prototyping with agents without writing code.
Built on AgentChat.

```bash
pip install -U autogenstudio
autogenstudio ui --port 8080 --appdir ./myapp
```

+ _Start here if you are new to AutoGen and want to prototype with agents without writing code._

+++

```{button-ref} user-guide/autogenstudio-user-guide/index
@@ -124,7 +98,7 @@ async def main() -> None:
asyncio.run(main())
```

- _Start here if you are building conversational agents. [Migrating from AutoGen 0.2?](./user-guide/agentchat-user-guide/migration-guide.md)._
+ _Start here if you are prototyping with agents using Python. [Migrating from AutoGen 0.2?](./user-guide/agentchat-user-guide/migration-guide.md)._

+++

@@ -147,7 +121,7 @@ An event-driven programming framework for building scalable multi-agent AI systems
* Research on multi-agent collaboration.
* Distributed agents for multi-language applications.

- _Start here if you are building workflows or distributed agent systems._
+ _Start here if you are getting serious about building multi-agent systems._

+++

Expand All @@ -167,7 +141,7 @@ Get Started
Implementations of Core and AgentChat components that interface with external services or other libraries.
You can find and use community extensions or create your own. Examples of built-in extensions:

- * {py:class}`~autogen_ext.tools.langchain.LangChainToolAdapter` for using LangChain tools.
+ * {py:class}`~autogen_ext.tools.mcp.McpWorkbench` for using Model-Context Protocol (MCP) servers.
* {py:class}`~autogen_ext.agents.openai.OpenAIAssistantAgent` for using Assistant API.
* {py:class}`~autogen_ext.code_executors.docker.DockerCommandLineCodeExecutor` for running model-generated code in a Docker container (see the sketch below this list).
* {py:class}`~autogen_ext.runtimes.grpc.GrpcWorkerAgentRuntime` for distributed agents.
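As a hedged illustration of the Docker executor listed above: the constructor
arguments, context-manager usage, and result fields below are assumptions based
on typical usage of this extension, not verified against this release.

```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor


async def main() -> None:
    # Assumed: the executor starts a container on entry and stops it on exit.
    async with DockerCommandLineCodeExecutor(work_dir="coding") as executor:
        result = await executor.execute_code_blocks(
            [CodeBlock(language="python", code="print('Hello from Docker!')")],
            cancellation_token=CancellationToken(),
        )
        print(result.output)


asyncio.run(main())
```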
@@ -16,7 +16,9 @@
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_reset`: The abstract method that resets the agent to its initial state. This method is called when the agent is asked to reset itself.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.produced_message_types`: The list of possible {py:class}`~autogen_agentchat.messages.BaseChatMessage` message types the agent can produce in its response.\n",
"\n",
"Optionally, you can implement the the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` method to stream messages as they are generated by the agent. If this method is not implemented, the agent\n",
"Optionally, you can implement the the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` method to stream messages as they are generated by the agent.\n",
"This method is called by {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream` to stream messages.\n",
"If this method is not implemented, the agent\n",
"uses the default implementation of {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`\n",
"that calls the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method and\n",
"yields all messages in the response."
@@ -731,7 +733,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.12.7"
}
},
"nbformat": 4,