Large Multimodal Models in AgentChat #554
Merged
Changes from all commits (18 commits):
- LMM Code added (BeibinLi, aa45ec0)
- LLaVA notebook update (BeibinLi, 2a47432)
- Test cases and Notebook modified for OpenAI v1 (BeibinLi, c568b9e)
- Move LMM into contrib (BeibinLi, 2b442f2)
- LMM test setup update (BeibinLi, 31b90d3)
- try...except... clause for LMM tests (BeibinLi, b10f712)
- disable patch for llava agent test (BeibinLi, 695a3a2)
- Add LMM Blog (BeibinLi, b0969bd)
- Change docstring for LMM agents (BeibinLi, 8e4222b)
- Docstring update patch (BeibinLi, 73a6c06)
- llava: insert reply at position 1 now (BeibinLi, 211b41c)
- Resolve comments (BeibinLi, 7b8a1a0)
- Signature typo fix for LMM agent: system_message (BeibinLi, e00692a)
- Update LMM "content" from latest OpenAI release (BeibinLi, d812486)
- update LMM test according to latest OpenAI release (BeibinLi, aa64852)
- Fully support GPT-4V now (BeibinLi, d4f04c0)
- GPT-4V link updated in blog (BeibinLi, 8d2e64b)
- Merge branch 'main' into lmm (sonichi)
.github/workflows/lmm-test.yml (new file, 60 lines):

```yaml
# This workflow will install Python dependencies and run tests with a variety of Python versions.
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: ContribTests

on:
  pull_request:
    branches: ['main', 'dev/v0.2']
    paths:
      - 'autogen/img_utils.py'
      - 'autogen/agentchat/contrib/multimodal_conversable_agent.py'
      - 'autogen/agentchat/contrib/llava_agent.py'
      - 'test/test_img_utils.py'
      - 'test/agentchat/contrib/test_lmm.py'
      - 'test/agentchat/contrib/test_llava.py'
      - '.github/workflows/lmm-test.yml'
      - 'setup.py'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }}
  cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}

jobs:
  LMMTest:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-2019]
        python-version: ["3.8", "3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install packages and dependencies for all tests
        run: |
          python -m pip install --upgrade pip wheel
          pip install pytest
      - name: Install packages and dependencies for LMM
        run: |
          pip install -e .[lmm]
          pip uninstall -y openai
      - name: Test LMM and LLaVA
        run: |
          pytest test/test_img_utils.py test/agentchat/contrib/test_lmm.py test/agentchat/contrib/test_llava.py
      - name: Coverage
        if: matrix.python-version == '3.10'
        run: |
          pip install "coverage>=5.3"
          coverage run -a -m pytest test/test_img_utils.py test/agentchat/contrib/test_lmm.py test/agentchat/contrib/test_llava.py
          coverage xml
      - name: Upload coverage to Codecov
        if: matrix.python-version == '3.10'
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
```
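
To reproduce the test step locally, a sketch like the following should be close (assuming the repository root as the working directory, with `pip install -e .[lmm]` and `pip install pytest` already run, mirroring the CI steps above):

```python
# Local equivalent of the "Test LMM and LLaVA" CI step above (a sketch, not part of the diff).
# Assumes: cwd is the repository root; the `lmm` extra and pytest are installed.
import sys

import pytest

exit_code = pytest.main(
    [
        "test/test_img_utils.py",
        "test/agentchat/contrib/test_lmm.py",
        "test/agentchat/contrib/test_llava.py",
    ]
)
sys.exit(int(exit_code))
```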
autogen/agentchat/contrib/llava_agent.py (new file, 178 lines):

```python
import json
import logging
import re
from typing import List, Optional, Union

import replicate
import requests

from autogen.agentchat.agent import Agent
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent
from autogen.code_utils import content_str
from autogen.img_utils import get_image_data, llava_formater

try:
    from termcolor import colored
except ImportError:

    def colored(x, *args, **kwargs):
        return x


logger = logging.getLogger(__name__)

# We will override the following variables later.
SEP = "###"

DEFAULT_LLAVA_SYS_MSG = "You are an AI agent and you can view images."


class LLaVAAgent(MultimodalConversableAgent):
    def __init__(
        self,
        name: str,
        system_message: Optional[Union[str, List]] = DEFAULT_LLAVA_SYS_MSG,
        *args,
        **kwargs,
    ):
        """
        Args:
            name (str): agent name.
            system_message (str): system message for the ChatCompletion inference.
                Please override this attribute if you want to reprogram the agent.
            **kwargs (dict): Please refer to other kwargs in
                [ConversableAgent](../conversable_agent#__init__).
        """
        super().__init__(
            name,
            system_message=system_message,
            *args,
            **kwargs,
        )

        assert self.llm_config is not None, "llm_config must be provided."
        self.register_reply([Agent, None], reply_func=LLaVAAgent._image_reply, position=1)

    def _image_reply(self, messages=None, sender=None, config=None):
        # Note: we did not use "llm_config" yet.
        if all((messages is None, sender is None)):
            error_msg = f"Either {messages=} or {sender=} must be provided."
            logger.error(error_msg)
            raise AssertionError(error_msg)

        if messages is None:
            messages = self._oai_messages[sender]

        # The formats for LLaVA and GPT are different, so we handle them manually here.
        images = []
        prompt = content_str(self.system_message) + "\n"
        for msg in messages:
            role = "Human" if msg["role"] == "user" else "Assistant"
            images += [d["image_url"]["url"] for d in msg["content"] if d["type"] == "image_url"]
            content_prompt = content_str(msg["content"])
            prompt += f"{SEP}{role}: {content_prompt}\n"
        prompt += "\n" + SEP + "Assistant: "
        # Strip the data-URL prefix so only the raw base64 payload is sent.
        images = [re.sub("data:image/.+;base64,", "", im, count=1) for im in images]
        print(colored(prompt, "blue"))

        out = ""
        retry = 10
        while len(out) == 0 and retry > 0:
            # Image names will be inferred automatically from llava_call.
            out = llava_call_binary(
                prompt=prompt,
                images=images,
                config_list=self.llm_config["config_list"],
                temperature=self.llm_config.get("temperature", 0.5),
                max_new_tokens=self.llm_config.get("max_new_tokens", 2000),
            )
            retry -= 1

        assert out != "", "Empty response from LLaVA."

        return True, out


def _llava_call_binary_with_config(
    prompt: str, images: list, config: dict, max_new_tokens: int = 1000, temperature: float = 0.5, seed: int = 1
):
    if config["base_url"].find("0.0.0.0") >= 0 or config["base_url"].find("localhost") >= 0:
        llava_mode = "local"
    else:
        llava_mode = "remote"

    if llava_mode == "local":
        headers = {"User-Agent": "LLaVA Client"}
        pload = {
            "model": config["model"],
            "prompt": prompt,
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
            "stop": SEP,
            "images": images,
        }

        response = requests.post(
            config["base_url"].rstrip("/") + "/worker_generate_stream", headers=headers, json=pload, stream=False
        )

        for chunk in response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b"\0"):
            if chunk:
                data = json.loads(chunk.decode("utf-8"))
                output = data["text"].split(SEP)[-1]
    elif llava_mode == "remote":
        # The Replicate version of the model only supports 1 image for now.
        img = "data:image/jpeg;base64," + images[0]
        response = replicate.run(
            config["base_url"], input={"image": img, "prompt": prompt.replace("<image>", " "), "seed": seed}
        )
        # The yorickvp/llava-13b model can stream output as it's running.
        # The predict method returns an iterator, and you can iterate over that output.
        output = ""
        for item in response:
            # https://replicate.com/yorickvp/llava-13b/versions/2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591/api#output-schema
            output += item

    # Remove the prompt and surrounding whitespace.
    output = output.replace(prompt, "").strip()
    return output


def llava_call_binary(
    prompt: str, images: list, config_list: list, max_new_tokens: int = 1000, temperature: float = 0.5, seed: int = 1
):
    # TODO 1: add caching around the LLaVA call to save compute and cost.
    # TODO 2: add `seed` to ensure reproducibility. The seed is not working now.
    for config in config_list:
        try:
            return _llava_call_binary_with_config(prompt, images, config, max_new_tokens, temperature, seed)
        except Exception as e:
            # If this config fails, fall through to the next one in the list.
            print(f"Error: {e}")
            continue


def llava_call(prompt: str, llm_config: dict) -> str:
    """
    Makes a call to the LLaVA service to generate text based on a given prompt.
    """
    prompt, images = llava_formater(prompt, order_image_tokens=False)

    for im in images:
        if len(im) == 0:
            raise RuntimeError("An image is empty!")

    return llava_call_binary(
        prompt,
        images,
        config_list=llm_config["config_list"],
        max_new_tokens=llm_config.get("max_new_tokens", 2000),
        temperature=llm_config.get("temperature", 0.5),
        seed=llm_config.get("seed", None),
    )
```
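
For context, a minimal usage sketch (not part of the diff): the model name, endpoint URL, and image URL below are placeholders, and the `<img ...>` tag syntax assumes `llava_formater` extracts image tags from the prompt.

```python
# Hypothetical usage sketch, not part of this PR.
# The endpoint, model name, and image URL below are placeholders.
from autogen.agentchat.contrib.llava_agent import LLaVAAgent, llava_call

llava_config_list = [
    {
        "model": "llava-v1.5-13b",             # assumed model name
        "api_key": "None",                     # a local worker needs no real key
        "base_url": "http://localhost:10000",  # assumed local LLaVA worker address
    }
]

# An agent whose LLaVA reply function is registered at position 1 (see above).
assistant = LLaVAAgent(
    name="image-explainer",
    llm_config={"config_list": llava_config_list, "temperature": 0.5, "max_new_tokens": 500},
)

# Or call the service directly; image tags in the prompt are parsed out first.
print(
    llava_call(
        "Describe this image: <img https://example.com/photo.jpg>",
        llm_config={"config_list": llava_config_list},
    )
)
```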
autogen/agentchat/contrib/multimodal_conversable_agent.py (new file, 107 lines):

```python
from typing import Callable, Dict, List, Optional, Union

from autogen import OpenAIWrapper
from autogen.agentchat import Agent, ConversableAgent
from autogen.code_utils import content_str
from autogen.img_utils import gpt4v_formatter

try:
    from termcolor import colored
except ImportError:

    def colored(x, *args, **kwargs):
        return x


DEFAULT_LMM_SYS_MSG = """You are a helpful AI assistant."""


class MultimodalConversableAgent(ConversableAgent):
    def __init__(
        self,
        name: str,
        system_message: Optional[Union[str, List]] = DEFAULT_LMM_SYS_MSG,
        is_termination_msg: Optional[Callable[[Dict], bool]] = None,
        *args,
        **kwargs,
    ):
        """
        Args:
            name (str): agent name.
            system_message (str): system message for the OpenAIWrapper inference.
                Please override this attribute if you want to reprogram the agent.
            **kwargs (dict): Please refer to other kwargs in
                [ConversableAgent](../conversable_agent#__init__).
        """
        super().__init__(
            name,
            system_message,
            is_termination_msg=is_termination_msg,
            *args,
            **kwargs,
        )

        self.update_system_message(system_message)
        # By default, terminate when any text item in the multimodal content equals "TERMINATE".
        self._is_termination_msg = (
            is_termination_msg
            if is_termination_msg is not None
            else (lambda x: any(item["text"] == "TERMINATE" for item in x.get("content") if item["type"] == "text"))
        )

    @property
    def system_message(self) -> List:
        """Return the system message."""
        return self._oai_system_message[0]["content"]

    def update_system_message(self, system_message: Union[Dict, List, str]):
        """Update the system message.

        Args:
            system_message (str): system message for the OpenAIWrapper inference.
        """
        self._oai_system_message[0]["content"] = self._message_to_dict(system_message)["content"]
        self._oai_system_message[0]["role"] = "system"

    @staticmethod
    def _message_to_dict(message: Union[Dict, List, str]):
        """Convert a message to a dictionary.

        The message can be a string, a list, or a dictionary. A string is parsed into
        multimodal content items and put in the "content" field of the new dictionary.
        """
        if isinstance(message, str):
            return {"content": gpt4v_formatter(message)}
        if isinstance(message, list):
            return {"content": message}
        return message

    def _print_received_message(self, message: Union[Dict, str], sender: Agent):
        # Print the message received.
        print(colored(sender.name, "yellow"), "(to", f"{self.name}):\n", flush=True)
        if message.get("role") == "function":
            func_print = f"***** Response from calling function \"{message['name']}\" *****"
            print(colored(func_print, "green"), flush=True)
            print(content_str(message["content"]), flush=True)
            print(colored("*" * len(func_print), "green"), flush=True)
        else:
            content = message.get("content")
            if content is not None:
                if "context" in message:
                    # Instantiate the format-string template with the provided context.
                    content = OpenAIWrapper.instantiate(
                        content,
                        message["context"],
                        self.llm_config and self.llm_config.get("allow_format_str_template", False),
                    )
                print(content_str(content), flush=True)
            if "function_call" in message:
                func_print = f"***** Suggested function Call: {message['function_call'].get('name', '(No function name found)')} *****"
                print(colored(func_print, "green"), flush=True)
                print(
                    "Arguments: \n",
                    message["function_call"].get("arguments", "(No arguments found)"),
                    flush=True,
                    sep="",
                )
                print(colored("*" * len(func_print), "green"), flush=True)
        print("\n", "-" * 80, flush=True, sep="")
```