[Experimental] Modality Transforms #2836
Conversation
Lovely!
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files:

@@            Coverage Diff            @@
##              main    #2836      +/- ##
===========================================
- Coverage    33.12%   12.24%   -20.89%
===========================================
  Files           88       91        +3
  Lines         9518     9775      +257
  Branches      2037     2095       +58
===========================================
- Hits          3153     1197     -1956
- Misses        6096     8565     +2469
+ Partials       269       13      -256

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
✅ GitGuardian: there are no secrets present in this pull request anymore. If the previously detected secrets were true positives and are still valid, we highly recommend revoking them.
Closing this, as we will consider porting message transforms to autogen-agentchat 0.4.
Why are these changes needed?
NOTE: Do not review; I have not finished this feature just yet.
With the introduction of GPT-4o, we should expect increased interest in multimodal capabilities in AutoGen. This PR introduces a new transform that lets users add image modality to any agent, with any image captioner, and it will serve as the blueprint for other modalities.
Current State of Multimodality Support in AutoGen
Users must currently rely on MultimodalConversableAgent or VisionCapability if they want to add image support to their agents, both of which are LLM-based image-captioning approaches; a sketch of the VisionCapability route is shown below.
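For reference, here is roughly what the existing VisionCapability route looks like. This is a minimal sketch based on the 0.2 contrib API as I understand it; the model names and config values are placeholders.

```python
from autogen import ConversableAgent
from autogen.agentchat.contrib.capabilities.vision_capability import VisionCapability

# Placeholder configs: a text-only model for the agent, an LMM for captioning.
config_list = [{"model": "gpt-3.5-turbo", "api_key": "sk-..."}]
config_list_4v = [{"model": "gpt-4-vision-preview", "api_key": "sk-..."}]

agent = ConversableAgent(name="assistant", llm_config={"config_list": config_list})

# VisionCapability captions incoming images with an LMM so the wrapped
# (text-only) agent can respond to them.
vision = VisionCapability(lmm_config={"config_list": config_list_4v, "max_tokens": 300})
vision.add_to_agent(agent)
```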
My requirements for adding multimodality support to agents:

Approaches Considered
I was deciding between two approaches:
- ModalityAdapters: a new agent capability that sits in front of every incoming message and converts it from one modality type to another (primarily to text).
- Modality Transforms: use the existing TransformMessages capability to convert messages from one modality to another.

Initially, I was working on ModalityAdapters, as it seemed promising (I documented my thought process in this pdf; I initially called it ModalityTranslators, but we voted for the adapter naming convention as it fit better). However, I encountered a few roadblocks that led to the decision to use TransformMessages instead: ModalityAdapters seemed too close to TransformMessages, leading to unnecessary repeated code.

Using TransformMessages is more verbose, but it has several advantages:
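To ground this, here is a minimal sketch of what an image-to-text transform could look like under the existing MessageTransform protocol. The ImageCaptioning class and its captioner argument are my placeholders, not this PR's final API; the apply_transform/get_logs signatures follow the 0.2 contrib interface as I understand it.

```python
from typing import Callable, Dict, List, Tuple


class ImageCaptioning:
    """Hypothetical transform: replaces image content with text captions.

    Not this PR's final API; it only illustrates the MessageTransform shape.
    """

    def __init__(self, captioner: Callable[[str], str]):
        # `captioner` maps an image URL (or base64 data URI) to a caption,
        # so any captioning backend can be plugged in.
        self._captioner = captioner

    def apply_transform(self, messages: List[Dict]) -> List[Dict]:
        for message in messages:
            content = message.get("content")
            if not isinstance(content, list):
                continue  # plain-text message, nothing to convert
            for item in content:
                if isinstance(item, dict) and item.get("type") == "image_url":
                    caption = self._captioner(item["image_url"]["url"])
                    item.clear()
                    item.update({"type": "text", "text": f"(image: {caption})"})
        return messages

    def get_logs(
        self, pre_transform_messages: List[Dict], post_transform_messages: List[Dict]
    ) -> Tuple[str, bool]:
        changed = pre_transform_messages != post_transform_messages
        log = "Replaced image content with captions." if changed else "No image content found."
        return log, changed
```

Because this plugs into TransformMessages, the same agent, captioner, or transform can be swapped independently, which is the composability argument for this approach.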
Tasks
Tasks to complete before opening this PR for review.
Things I noticed in the codebase that made it difficult to add new modalities
Demo
Here's a screenshot of a GPT-3.5 agent (named "gpt_3_w_image_modality") identifying the animal in an image generated by DALL-E 3 (ignore the double messages; GroupChat doesn't work with transform messages just yet, so I had to hack around it).
Here's the code that I used to test:
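As a stand-in (this is a minimal reconstruction sketch, not the author's actual snippet), here is how such a test could look, assuming the hypothetical ImageCaptioning transform from the earlier sketch, a stub captioner, and a placeholder image URL:

```python
from autogen import ConversableAgent, UserProxyAgent
from autogen.agentchat.contrib.capabilities.transform_messages import TransformMessages

config_list = [{"model": "gpt-3.5-turbo", "api_key": "sk-..."}]  # placeholder config

# A text-only GPT-3.5 agent that gains image modality via the transform.
agent = ConversableAgent(
    name="gpt_3_w_image_modality",
    llm_config={"config_list": config_list},
)

# Attach the hypothetical image-to-text transform through the existing
# TransformMessages capability. The lambda is a stub captioner for the
# sketch; a real one would call a captioning model.
TransformMessages(
    transforms=[ImageCaptioning(captioner=lambda url: "a golden retriever")]
).add_to_agent(agent)

user = UserProxyAgent(name="user", human_input_mode="NEVER", code_execution_config=False)

# Send a multimodal message; the transform captions the image before the
# text-only model ever sees it.
user.initiate_chat(
    agent,
    message={
        "content": [
            {"type": "text", "text": "What animal is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/dalle3_animal.png"}},
        ]
    },
)
```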
Related issue number
Checks