
Aegis dag #6197

Closed
wants to merge 45 commits into from

Conversation

abhinav-aegis
Contributor

Why are these changes needed?

Related issue number

Checks

Helps close #4623 and #5131. I am creating a pull request, as suggested by @lspinheiro, so that we can compare changes and provide comments. The code itself is not ready for merging.

You can find the different Graph Execution patterns in the test files: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/tests/test_digraph_group_chat.py#L596
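
As a generic illustration of what graph-based execution enables (independent of this PR's actual API), nodes of a DAG can be scheduled in levels, where each level may run concurrently once every predecessor has finished. A minimal stdlib-only sketch; the function name and edge format are hypothetical:

```python
def execution_levels(edges: dict[str, list[str]]) -> list[list[str]]:
    """Group DAG nodes into levels: each level can run concurrently
    once every node in the previous levels has finished."""
    # Count incoming edges for each node.
    indegree = {n: 0 for n in edges}
    for targets in edges.values():
        for t in targets:
            indegree[t] += 1
    # Start with the nodes that have no prerequisites.
    level = [n for n, d in indegree.items() if d == 0]
    levels: list[list[str]] = []
    while level:
        levels.append(sorted(level))
        next_level = []
        for n in level:
            for t in edges[n]:
                indegree[t] -= 1
                if indegree[t] == 0:
                    next_level.append(t)
        level = next_level
    return levels

# Fan-out from A to B and C, then fan-in to D:
print(execution_levels({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}))
# → [['A'], ['B', 'C'], ['D']]
```

This is the scheduling intuition behind patterns like fan-out/fan-in; the PR's tests cover further patterns such as conditional edges.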

Message filtering is implemented here: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_chat_agent_container.py#L116

Structured message schema deserialization is here: https://github.com/abhinav-aegis/autogen/blob/aegis-dag/python/packages/autogen-agentchat/src/autogen_agentchat/utils/_structured_message_utils.py

See several tests for the deserialization here: https://github.com/abhinav-aegis/autogen/blob/aegis-dag/python/packages/autogen-agentchat/tests/test_structured_message_utils.py
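
To illustrate the deserialization idea in isolation (this is a stdlib-only sketch, not the PR's implementation; the `Translation` type and envelope format are hypothetical, and a dataclass stands in for a Pydantic model): the receiver keeps a registry mapping a schema name to a model class, then rebuilds the typed message from the JSON payload.

```python
import json
from dataclasses import dataclass

# Hypothetical structured content type -- a stand-in for a Pydantic model.
@dataclass
class Translation:
    source_language: str
    target_language: str
    text: str

# Registry mapping a message type name to its model class, so the
# receiver can pick the right model when deserializing a payload.
registry = {"Translation": Translation}

payload = '{"type": "Translation", "content": {"source_language": "en", "target_language": "es", "text": "hola"}}'
envelope = json.loads(payload)
message = registry[envelope["type"]](**envelope["content"])
print(message.text)  # → hola
```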

See StructuredMessageComponent here: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/src/autogen_agentchat/messages.py#L214

See serialization tests here: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/tests/test_messages.py#L56


ekzhu and others added 30 commits March 18, 2025 19:46

Verified

This commit was created on GitHub.com and signed with GitHub’s verified signature.
ekzhu and others added 14 commits March 25, 2025 14:05
… message Pydantic models.
@victordibia
Collaborator

victordibia commented Apr 4, 2025

Hi @abhinav-aegis ,

Thanks for putting this together, much appreciated. I am personally excited to see this take shape and ideally, it would be great to see it get to a point where we support it in AutoGen Studio.

That said, as we improve it, what is a good minimal testable example we can use to help readers of this thread get a sense of what is being accomplished? It would also provide a framework for the expected behaviour.

Perhaps something like the below (B and C below seem to not be responding with )?

import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams._group_chat._digraph_group_chat import DiGraphGroupChat, DiGraph, DiGraphNode, DiGraphEdge


model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
agent_a = AssistantAgent("A", model_client=model_client, system_message="You are a helpful assistant.")
agent_b = AssistantAgent("B", model_client=model_client, system_message="You are a helpful Spanish translator. Whenever you receive a message, translate it to Spanish and respond with the translation.")
agent_c = AssistantAgent("C", model_client=model_client, system_message="You are a helpful markdown assistant. Whenever you receive a message, format it as markdown (use tables where appropriate) and respond with the formatted message.")

graph = DiGraph(
    nodes={
        "A": DiGraphNode(name="A", edges=[DiGraphEdge(target="B")]),
        "B": DiGraphNode(name="B", edges=[DiGraphEdge(target="C")]),
        "C": DiGraphNode(name="C", edges=[]),
    }
)

team = DiGraphGroupChat(
    participants=[agent_a, agent_b, agent_c],
    graph=graph,
    termination_condition=MaxMessageTermination(5),
)


async def main() -> None:
    stream = team.run_stream(task="Write a 3 line haiku poem about the amount of rainfall each month for California.")
    async for message in stream:
        print("********", message)


asyncio.run(main())

Result

TaskResult(messages=[TextMessage(source='user', models_usage=None, metadata={}, content='Write a 3 line haiku poem about the amount of rainfail each month for california.', type='TextMessage'), TextMessage(source='A', models_usage=RequestUsage(prompt_tokens=37, completion_tokens=21), metadata={}, content="Winter's soft whispers,  \nSpring's vibrant blooms drink deeply,  \nSummer's drought holds sway.", type='TextMessage'), TextMessage(source='B', models_usage=RequestUsage(prompt_tokens=79, completion_tokens=27), metadata={}, content='Susurros de invierno,  \nLas flores vibrantes de primavera beben profundamente,  \nLa sequía del verano prevalece.', type='TextMessage'), TextMessage(source='C', models_usage=RequestUsage(prompt_tokens=118, completion_tokens=64), metadata={}, content="### Haiku about California Rainfall\n\n**English:**\n\nWinter's soft whispers,  \nSpring's vibrant blooms drink deeply,  \nSummer's drought holds sway.\n\n---\n\n**Spanish:**\n\nSusurros de invierno,  \nLas flores vibrantes de primavera beben profundamente,  \nLa sequía del verano prevalece.", type='TextMessage')], stop_reason='The DiGraph chat has finished executing.')

@abhinav-aegis
Contributor Author

@victordibia Thanks a lot for your feedback. I will definitely include such an example. I think the decision last week at Office Hours was to create an extension and maintain this library as an extension. I will get to that later this week and when I do that, I will be sure to include such an example in the Documentation.

@ekzhu
Collaborator

ekzhu commented Apr 11, 2025

@abhinav-aegis, @victordibia and I discussed, and we think it may be good for us to create an experimental module in AgentChat instead.

@abhinav-aegis
Contributor Author

@abhinav-aegis, @victordibia and I discussed, and we think it may be good for us to create an experimental module in AgentChat instead.

I am okay either way, as a community extension or an experimental module. I will respond on Discord; it is easier to have a quick conversation there.


codecov bot commented Apr 15, 2025

Codecov Report

Attention: Patch coverage is 88.88889% with 41 lines in your changes missing coverage. Please review.

Project coverage is 77.53%. Comparing base (71b7429) to head (754a20c).

Files with missing lines Patch % Lines
...agentchat/teams/_group_chat/_digraph_group_chat.py 85.45% 32 Missing ⚠️
...togen_agentchat/utils/_structured_message_utils.py 94.31% 5 Missing ⚠️
...utogen-agentchat/src/autogen_agentchat/messages.py 92.85% 2 Missing ⚠️
...chat/teams/_group_chat/_base_group_chat_manager.py 92.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6197      +/-   ##
==========================================
+ Coverage   73.02%   77.53%   +4.51%     
==========================================
  Files         306      202     -104     
  Lines       17591    14816    -2775     
  Branches      406        0     -406     
==========================================
- Hits        12845    11487    -1358     
+ Misses       4473     3329    -1144     
+ Partials      273        0     -273     
Flag Coverage Δ
unittests 77.53% <88.88%> (+4.51%) ⬆️


@ekzhu (Collaborator) left a comment


I think the implementation for filtering can be changed by simply overriding the participant factory in BaseGroupChat. I can make the changes.


@event
async def handle_agent_response(self, message: GroupChatAgentResponse, ctx: MessageContext) -> None:
    try:
        async with self._lock:
Collaborator


No need for the lock here, because the base class SequentialRoutedAgent already serializes handling of GroupChatAgentResponse.

Collaborator


Though we need to consider the case where we are expecting multiple concurrent responses from participants: we shouldn't proceed until we have collected all the responses we are expecting; otherwise, the flow may become uncontrollable.
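
One way to handle that case, sketched with plain asyncio (an illustration of the barrier idea, not the PR's implementation; all names are hypothetical): buffer responses and only release the flow once every expected sender has answered.

```python
import asyncio

class ResponseCollector:
    """Buffer participant responses until every expected sender has answered."""

    def __init__(self, expected: set[str]) -> None:
        self._pending = set(expected)
        self._responses: dict[str, str] = {}
        self._done = asyncio.Event()

    def add(self, sender: str, response: str) -> None:
        self._responses[sender] = response
        self._pending.discard(sender)
        if not self._pending:
            self._done.set()  # all expected responses collected

    async def wait_all(self) -> dict[str, str]:
        await self._done.wait()
        return self._responses

async def demo() -> dict[str, str]:
    collector = ResponseCollector(expected={"B", "C"})

    async def respond(name: str, delay: float) -> None:
        await asyncio.sleep(delay)  # simulate work finishing in any order
        collector.add(name, f"answer from {name}")

    # The flow only resumes after both B and C have responded.
    results = await asyncio.gather(respond("C", 0.02), respond("B", 0.01), collector.wait_all())
    return results[2]

responses = asyncio.run(demo())
print(sorted(responses))  # → ['B', 'C']
```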

@@ -266,6 +293,12 @@ async def select_speaker(self, thread: List[BaseAgentEvent | BaseChatMessage]) -
topic type of the selected speaker."""
...

async def select_speakers(self, thread: List[BaseAgentEvent | BaseChatMessage]) -> List[str]:
Collaborator


We should just rename select_speaker to select_speakers, and update the signature so it always returns Sequence[str].

It is not used publicly, so we can still change it.

@@ -31,10 +33,11 @@ class ChatAgentContainer(SequentialRoutedAgent):
agent (ChatAgent): The agent to delegate message handling to.
message_factory (MessageFactory): The message factory to use for
creating messages from JSON data.
sender_filter (List[AgentId] | None): A list of agent IDs to filter messages by sender.
Collaborator


We don't need to use a filter. Instead, we pass the manager agent's topic type as parent_topic_type, so each participant's response is sent to the manager agent.

Then, we set up the manager agent to act as the router for messages, based on the graph definition.
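
As a rough sketch of that router idea (an assumed design for illustration, not the PR's code): the manager keeps the graph's adjacency and forwards each response only to the sender's successors.

```python
# Graph adjacency held by the manager; names are hypothetical.
graph_edges: dict[str, list[str]] = {
    "A": ["B", "C"],  # A fans out to B and C
    "B": ["D"],
    "C": ["D"],       # B and C fan in to D
    "D": [],          # terminal node
}

def route(sender: str) -> list[str]:
    """Return which participants the manager should publish to next."""
    return graph_edges.get(sender, [])

print(route("A"))  # → ['B', 'C']
print(route("D"))  # → []
```

An empty result would signal that the flow has reached a terminal node for that branch.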

@abhinav-aegis
Contributor Author

@ekzhu based on the request that I create a community extension, I have created a new Git repo here: https://github.com/abhinav-aegis/autogen-graph. I have made many new changes in that repo since I created this pull request, so it is no longer clear to me where you would like me to contribute changes. This pull request is outdated and should probably be closed so that we have a clean way to integrate the changes.

@ekzhu
Collaborator

ekzhu commented Apr 16, 2025

Okay. Let's close this PR and continue the discussion based on your new code.

Development

Successfully merging this pull request may close these issues.

Enable True Graph-Based Execution flow Pattern in AgentChat
3 participants