Aegis dag #6197
Conversation
Hi @abhinav-aegis, thanks for putting this together, much appreciated. I am personally excited to see this take shape and, ideally, it would be great to see it get to a point where we support it in AutoGen Studio. That said, as we improve it, what is a good minimum testable example we can use to help readers of this thread get a good sense of what is being accomplished? It will also provide some framework for expected behaviour. Perhaps something like the below (B and C below seem to not be responding with )?
from autogen_ext.models.openai._openai_client import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams._group_chat._digraph_group_chat import DiGraphGroupChat, DiGraph, DiGraphNode, DiGraphEdge

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

agent_a = AssistantAgent("A", model_client=model_client, system_message="You are a helpful assistant.")
agent_b = AssistantAgent("B", model_client=model_client, system_message="You are a helpful Spanish translator. Whenever you receive a message, translate it to Spanish and respond with the translation.")
agent_c = AssistantAgent("C", model_client=model_client, system_message="You are a helpful markdown assistant. Whenever you receive a message, format it as markdown (use tables where appropriate) and respond with the formatted message.")

graph = DiGraph(
    nodes={
        "A": DiGraphNode(name="A", edges=[DiGraphEdge(target="B")]),
        "B": DiGraphNode(name="B", edges=[DiGraphEdge(target="C")]),
        "C": DiGraphNode(name="C", edges=[]),
    }
)

team = DiGraphGroupChat(
    participants=[agent_a, agent_b, agent_c],
    graph=graph,
    termination_condition=MaxMessageTermination(5),
)
stream = team.run_stream(task="Write a 3 line haiku poem about the amount of rainfail each month for california.")
async for message in stream:
    print("********", message)

Result:

TaskResult(messages=[TextMessage(source='user', models_usage=None, metadata={}, content='Write a 3 line haiku poem about the amount of rainfail each month for california.', type='TextMessage'), TextMessage(source='A', models_usage=RequestUsage(prompt_tokens=37, completion_tokens=21), metadata={}, content="Winter's soft whispers, \nSpring's vibrant blooms drink deeply, \nSummer's drought holds sway.", type='TextMessage'), TextMessage(source='B', models_usage=RequestUsage(prompt_tokens=79, completion_tokens=27), metadata={}, content='Susurros de invierno, \nLas flores vibrantes de primavera beben profundamente, \nLa sequía del verano prevalece.', type='TextMessage'), TextMessage(source='C', models_usage=RequestUsage(prompt_tokens=118, completion_tokens=64), metadata={}, content="### Haiku about California Rainfall\n\n**English:**\n\nWinter's soft whispers, \nSpring's vibrant blooms drink deeply, \nSummer's drought holds sway.\n\n---\n\n**Spanish:**\n\nSusurros de invierno, \nLas flores vibrantes de primavera beben profundamente, \nLa sequía del verano prevalece.", type='TextMessage')], stop_reason='The DiGraph chat has finished executing.')
@victordibia Thanks a lot for your feedback. I will definitely include such an example. I think the decision at last week's Office Hours was to create and maintain this as an extension. I will get to that later this week, and when I do, I will be sure to include such an example in the documentation.
@abhinav-aegis, @victordibia and I discussed, and we think it may be good for us to create an experimental module in AgentChat instead.
I am okay either way - as a community extension or as an experimental module. I will respond on Discord - it is easier to have a quick conversation there.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #6197 +/- ##
==========================================
+ Coverage 73.02% 77.53% +4.51%
==========================================
Files 306 202 -104
Lines 17591 14816 -2775
Branches 406 0 -406
==========================================
- Hits 12845 11487 -1358
+ Misses 4473 3329 -1144
+ Partials 273 0 -273
I think the implementation for filtering can be changed by simply overriding the participant factory in BaseGroupChat. I can make the changes.
@event
async def handle_agent_response(self, message: GroupChatAgentResponse, ctx: MessageContext) -> None:
    try:
        async with self._lock:
No need for the lock here, because the base class SequentialRoutedAgent already serializes handling of GroupChatAgentResponse.
Though we need to consider the case where we are expecting multiple concurrent responses from participants: we shouldn't proceed until we have collected all the responses we are expecting; otherwise, the flow may become uncontrollable.
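A minimal sketch of that fan-in bookkeeping, with a hypothetical tracker that records which parents a node is still waiting on before it may be activated (the class and method names here are illustrative, not the PR's actual attributes):

from typing import Dict, Set

class FanInTracker:
    """Illustrative fan-in bookkeeping: a node is ready only after all of its parents have responded."""

    def __init__(self, parents_by_node: Dict[str, Set[str]]) -> None:
        # Copy the parent sets so the original graph definition is not mutated.
        self._pending: Dict[str, Set[str]] = {node: set(parents) for node, parents in parents_by_node.items()}

    def record_response(self, node: str, parent: str) -> bool:
        # Mark one parent's response as received and report whether the node has now heard from all parents.
        self._pending[node].discard(parent)
        return not self._pending[node]

For example, with parents_by_node={"D": {"B", "C"}}, record_response("D", "B") returns False and record_response("D", "C") then returns True, at which point D can safely be scheduled.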
@@ -266,6 +293,12 @@ async def select_speaker(self, thread: List[BaseAgentEvent | BaseChatMessage]) -> str:
        topic type of the selected speaker."""
        ...

    async def select_speakers(self, thread: List[BaseAgentEvent | BaseChatMessage]) -> List[str]:
We should just rename select_speaker to select_speakers and update the signature so it always returns Sequence[str]. It is not used publicly, so we can still change it.
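A small sketch of the proposed shape, with an illustrative abstract class standing in for the manager base class (the class name is hypothetical; only the method signature follows the suggestion above):

from abc import ABC, abstractmethod
from typing import List, Sequence

from autogen_agentchat.messages import BaseAgentEvent, BaseChatMessage

class GraphSpeakerSelector(ABC):
    """Illustrative stand-in for the group chat manager's speaker-selection hook."""

    @abstractmethod
    async def select_speakers(self, thread: List[BaseAgentEvent | BaseChatMessage]) -> Sequence[str]:
        """Return the topic types of the next speakers; a sequential flow returns a one-element sequence."""
        ...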
@@ -31,10 +33,11 @@ class ChatAgentContainer(SequentialRoutedAgent):
        agent (ChatAgent): The agent to delegate message handling to.
        message_factory (MessageFactory): The message factory to use for
            creating messages from JSON data.
        sender_filter (List[AgentId] | None): A list of agent IDs to filter messages by sender.
We don't need to use a filter. Instead, we pass the manager agent's topic type as parent_topic_type, so each participant's response is sent to the manager agent. Then, we set up the manager agent to act as the router for messages, based on the graph definition.
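A minimal sketch of that routing step, assuming the DiGraph, DiGraphNode, and DiGraphEdge types from the example earlier in this thread (the helper function name is illustrative):

from typing import List

from autogen_agentchat.teams._group_chat._digraph_group_chat import DiGraph

def next_speakers(graph: DiGraph, sender: str) -> List[str]:
    # After `sender` responds, the manager activates every participant reachable over one outgoing edge.
    return [edge.target for edge in graph.nodes[sender].edges]

With the A -> B -> C graph above, next_speakers(graph, "A") returns ["B"], and next_speakers(graph, "C") returns an empty list, which is where the run ends.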
@ekzhu based on the request that I create a community extension, I have created a new Git repo here: https://github.com/abhinav-aegis/autogen-graph. I have made a lot of new changes in that repo since I created this pull request, so it is no longer clear to me where you would like me to contribute changes. This pull request is outdated and should probably be closed so that we have a clean way to integrate the changes.
Okay. Close this PR and continue the discussion based on your new code.
Why are these changes needed?
Related issue number
Checks
Helps close #4623 and #5131. I am creating a pull request, as suggested by @lspinheiro, to be able to compare changes and provide comments. The code itself is not ready for merging.
You can find the different Graph Execution patterns in the test files: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/tests/test_digraph_group_chat.py#L596
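As one example of a pattern beyond the simple A -> B -> C chain above, here is a sketch of a fan-out/fan-in graph using the same types (not copied from the test file; the node names are illustrative):

# A fans out to B and C, which can respond concurrently; D fans in and waits for both.
graph = DiGraph(
    nodes={
        "A": DiGraphNode(name="A", edges=[DiGraphEdge(target="B"), DiGraphEdge(target="C")]),
        "B": DiGraphNode(name="B", edges=[DiGraphEdge(target="D")]),
        "C": DiGraphNode(name="C", edges=[DiGraphEdge(target="D")]),
        "D": DiGraphNode(name="D", edges=[]),
    }
)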
Message filtering is implemented here: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_chat_agent_container.py#L116
Structured message schema deserialization is here: https://github.com/abhinav-aegis/autogen/blob/aegis-dag/python/packages/autogen-agentchat/src/autogen_agentchat/utils/_structured_message_utils.py
See several tests for the deserialization here: https://github.com/abhinav-aegis/autogen/blob/aegis-dag/python/packages/autogen-agentchat/tests/test_structured_message_utils.py
See StructuredMessageComponent here: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/src/autogen_agentchat/messages.py#L214
See serialization tests here: https://github.com/abhinav-aegis/autogen/blob/7ddfb088ac5a7da37d5af59dad92d6f216426169/python/packages/autogen-agentchat/tests/test_messages.py#L56
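For readers unfamiliar with structured messages, a small sketch of the kind of typed payload these utilities and tests exercise, assuming the generic StructuredMessage from autogen_agentchat.messages wraps a Pydantic model as its content:

from pydantic import BaseModel

from autogen_agentchat.messages import StructuredMessage

class Review(BaseModel):
    score: int
    comment: str

# A structured message carries a typed Pydantic payload instead of plain text.
msg = StructuredMessage[Review](content=Review(score=4, comment="Looks good"), source="reviewer")

# Serialize to a JSON-compatible dict and validate it back into the typed message,
# which is the round trip the schema utilities above are concerned with.
data = msg.model_dump()
restored = StructuredMessage[Review].model_validate(data)
assert restored.content.score == 4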