
Bren #90

Open
wants to merge 11 commits into main

Conversation

BrennanOwYong (Collaborator) commented Nov 4, 2024

Why are these changes needed?

I see that the code that handles the activation of nested chats only accepts a Callable that takes a single parameter, the sender:

```python
elif isinstance(trigger, Callable):
    rst = trigger(sender)
    assert isinstance(rst, bool), f"trigger {trigger} must return a boolean value."
    return rst
```
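So as it stands, a Callable trigger can only look at who is sending, never at what was said or at any other runtime state. A minimal sketch of such a trigger (the agent name is just an example):

```python
def sender_only_trigger(sender) -> bool:
    # The framework calls trigger(sender) and expects a bool, so the decision
    # to start the nested chat can only depend on *who* the sender is,
    # not on the message content or the conversation so far.
    return sender.name == "user_proxy"
```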

I want to give devs more flexibility to decide when to have agents initiate nested chats.

E.g. 1 (a poor example, since a code runner already exists as part of some older versions of GPT):
The user proxy tells a code_writer agent that they want a script for something. The code_writer can write the script, but to test that script it needs a nested chat with a code_runner agent (without an LLM, of course) to verify the output and show the user. Sure, there is the option of a group chat where the conversation pattern is cyclic (as below):

User_Proxy > Code_Writer > Code_Tester: rinse and repeat

But end users might want the experience to be more conversational, where they can converse with the Code_Writer a little more before sending it off to do the work.
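A rough sketch of the E.g. 1 setup under the current API (the agent names, placeholder llm_config, chat-queue fields, and prompt are illustrative, not taken from this PR; only the sender-only trigger signature comes from the code quoted above):

```python
from autogen import ConversableAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o"}]}  # placeholder, adjust to your setup

user_proxy = UserProxyAgent("User_Proxy", human_input_mode="ALWAYS", code_execution_config=False)
code_writer = ConversableAgent("Code_Writer", llm_config=llm_config, human_input_mode="NEVER")
# The runner has no LLM; it only executes the code it is handed.
code_runner = ConversableAgent(
    "Code_Runner",
    llm_config=False,
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# When Code_Writer receives a message from User_Proxy, it first runs a nested
# chat with Code_Runner to execute and verify the script, then replies to the user.
code_writer.register_nested_chats(
    chat_queue=[{"recipient": code_runner, "summary_method": "last_msg", "max_turns": 2}],
    # Under the current check the trigger only sees the sender, so the decision
    # to start the nested chat cannot depend on what was actually said.
    trigger=lambda sender: sender is user_proxy,
)

user_proxy.initiate_chat(code_writer, message="Write a script that ...")
```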

E.g. 2:
Let's say I want to know which farmlands to invest in based on satellite images and historical weather conditions. I would have an agent that plans out investment strategies (Planner_Agent) speaking to an Executor_Agent that creates the automation to do the calculations based on those strategies. However, the agent that develops the strategy realises that its data is incomplete, something the developer did not know. Because the developer didn't know this, the agent workflow they created only has 2 agents, the Planner_Agent and the Executor_Agent. The developer does not know it yet, but the Executor_Agent will need a Data_Acquisition_Agent to go out and collect more data. I'm basically referring to a system that can identify its own limitations and write add-ons for itself.

The example above is something I personally faced, because I wanted to leverage AutoGen's pre-existing support for code execution and peer-to-peer conversations to build an intelligent hive that would take an instruction from the top and autonomously "run wild" to get it done.

On a personal note, I feel like conversation patterns that are "pre-set" ahead of runtime make applications built this way feel more like a Zapier automation workflow than an intelligent app that truly makes use of the potential of LLMs. To me, it is comparable to giving someone a Jeep and only telling them to drive it on nice flat paved roads, when a Jeep is already built to handle rough terrain.

Related issue number

Checks

@qingyun-wu

Thanks for the PR. I see that there are code formatting issues. Could you please run pre-commit to fix them? Thank you!

sonichi (Collaborator) commented Nov 4, 2024

@BrennanOwYong Thanks for the PR. @LinxinS97 @LeoLjl @JieyuZ2 might be interested in this PR as they have worked in similar directions. Would you like to discuss this together on Discord? If so, could you please share your Discord name? @BrennanOwYong

BrennanOwYong (Collaborator, Author)

Bingus Bongus
dog_nose_enjoyer
