Papers with Keyword: benchmark

  • The BrowserGym Ecosystem for Web Agent Research

    • Thibault Le Sellier De Chezelles, Maxime Gasse, Alexandre Drouin, Massimo Caccia, Léo Boisvert, Megh Thakkar, Tom Marty, Rim Assouel, Sahar Omidi Shayegan, Lawrence Keunho Jang, Xing Han Lù, Ori Yoran, Dehan Kong, Frank F. Xu, Siva Reddy, Quentin Cappart, Graham Neubig, Ruslan Salakhutdinov, Nicolas Chapados, Alexandre Lacoste
    • 🏛️ Institutions: ServiceNow Research, Mila, Polytechnique Montréal, CMU, McGill University, Tel Aviv University, Université de Montréal, iMean AI
    • 📅 Date: December 6, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [framework], [LLM], [automation], [BrowserGym], [AgentLab]
    • 📖 TLDR: This paper presents BrowserGym, an ecosystem designed to standardize the evaluation and benchmarking of web agents, particularly those leveraging Large Language Models (LLMs). It addresses the challenges posed by fragmented benchmarks and inconsistent methodologies in web agent research. BrowserGym provides a unified, gym-like environment with clearly defined observation and action spaces, enabling reproducible comparisons across various benchmarks. Additionally, AgentLab, a complementary framework, supports agent creation, testing, and analysis. The paper also features a large-scale experiment comparing the performance of 6 leading LLMs, highlighting the strengths and weaknesses of different models in real-world web tasks, while emphasizing the ongoing challenges in building efficient and robust web agents. A minimal gym-style interaction loop is sketched below.
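
To make the gym-style interface concrete, here is a minimal sketch of an observe–act loop in the spirit of BrowserGym. The environment id, the `task_kwargs` argument, the observation contents, and the `noop()` action string are assumptions for illustration, not the verbatim API; consult the BrowserGym documentation for exact names.

```python
# Minimal sketch of a gym-style web-agent loop (illustrative only; the task id,
# observation contents, and action strings are assumptions, not verbatim API).
import gymnasium as gym
import browsergym.core  # noqa: F401  # assumed to register "browsergym/..." environments


class NoopAgent:
    """Placeholder agent that always issues a no-op action string."""

    def act(self, obs) -> str:
        # A real agent would read obs (DOM / accessibility tree / screenshot)
        # and emit an action such as 'click("a51")' or 'fill("q", "hello")'.
        return "noop()"


env = gym.make("browsergym/openended", task_kwargs={"start_url": "https://example.com"})
obs, info = env.reset()
agent = NoopAgent()

for _ in range(5):  # a short rollout
    obs, reward, terminated, truncated, info = env.step(agent.act(obs))
    if terminated or truncated:
        break
env.close()
```
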
  • AndroidLab: Training and Systematic Benchmarking of Android Autonomous Agents

    • Yifan Xu, Xiao Liu, Xueqiao Sun, Siyi Cheng, Hao Yu, Hanyu Lai, Shudan Zhang, Dan Zhang, Jie Tang, Yuxiao Dong
    • 🏛️ Institutions: Tsinghua University, Peking University
    • 📅 Date: October 31, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [framework], [dataset], [benchmark], [AndroidLab]
    • 📖 TLDR: This paper introduces AndroidLab, a comprehensive framework for training and systematically benchmarking Android autonomous agents. It provides an operational environment with diverse modalities and action spaces, supporting both large language models (LLMs) and multimodal models (LMMs). The benchmark includes 138 tasks across nine apps on predefined Android virtual devices. Utilizing AndroidLab, the authors developed an Android Instruction dataset and trained six open-source LLMs and LMMs, significantly improving their average success rates.
  • OS-ATLAS: A Foundation Action Model For Generalist GUI Agents

    • Zhiyong Wu, Zhenyu Wu, Fangzhi Xu, Yian Wang, Qiushi Sun, Chengyou Jia, Kanzhi Cheng, Zichen Ding, Liheng Chen, Paul Pu Liang, Yu Qiao
    • 🏛️ Institutions: Shanghai AI Lab, Shanghai Jiaotong University, HKU, MIT
    • 📅 Date: October 30, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [GUI]
    • 🔑 Key: [model], [dataset], [benchmark], [OS-Atlas]
    • 📖 TLDR: This paper introduces OS-Atlas, a foundational GUI action model designed to enhance GUI grounding and out-of-distribution tasks. The authors developed a toolkit to synthesize multi-platform GUI grounding data, resulting in a cross-platform corpus of over 13 million GUI elements. OS-Atlas demonstrates significant performance improvements across six benchmarks spanning mobile, desktop, and web platforms.
  • Evaluating Cultural and Social Awareness of LLM Web Agents

    • Haoyi Qiu, Alexander R. Fabbri, Divyansh Agarwal, Kung-Hsiang Huang, Sarah Tan, Nanyun Peng, Chien-Sheng Wu
    • 🏛️ Institutions: UCLA, Salesforce AI Research
    • 📅 Date: October 30, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [CASA], [cultural awareness], [social awareness], [fine-tuning], [prompting]
    • 📖 TLDR: This paper introduces CASA, a benchmark designed to assess the cultural and social awareness of LLM web agents in tasks like online shopping and social discussion forums. It evaluates agents' abilities to detect and appropriately respond to norm-violating user queries and observations. The study finds that current LLM agents have limited cultural and social awareness, with less than 10% awareness coverage and over 40% violation rates. To enhance performance, the authors explore prompting and fine-tuning methods, demonstrating that combining both can offer complementary advantages.
  • Beyond Browsing: API-Based Web Agents

    • Yueqi Song, Frank Xu, Shuyan Zhou, Graham Neubig
    • 🏛️ Institutions: CMU
    • 📅 Date: October 24, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [API-based agent], [hybrid agent], [benchmark], [WebArena], [SOTA performance]
    • 📖 TLDR: This paper introduces API-based and hybrid agents designed to execute online tasks by accessing both APIs and traditional web browsing interfaces. In evaluations using WebArena, a benchmark for web navigation, the API-based agent achieves higher performance than browser-based agents, and the hybrid model achieves a success rate of 35.8%, setting a new state-of-the-art (SOTA) in task-agnostic web navigation. The findings highlight the efficiency and reliability gains of API interactions for web agents.
  • AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant

    • Chengyou Jia, Minnan Luo, Zhuohang Dang, Qiushi Sun, Fangzhi Xu, Junlin Hu, Tianbao Xie, Zhiyong Wu
    • 🏛️ Institutions: XJTU, Shanghai AI Lab, HKU
    • 📅 Date: October 24, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [GUI]
    • 🔑 Key: [framework], [multi-agent systems], [specialized generalist agent], [OSWorld benchmark]
    • 📖 TLDR: AgentStore introduces a scalable platform to integrate and manage heterogeneous agents, designed to enhance generalist assistant capabilities for diverse computer tasks. Using a MetaAgent and AgentToken strategy, AgentStore shows improved generalization on the OSWorld benchmark.
  • VideoWebArena: Evaluating Long Context Multimodal Agents with Video Understanding Web Tasks

    • Lawrence Jang, Yinheng Li, Charles Ding, Justin Lin, Paul Pu Liang, Dan Zhao, Rogerio Bonatti, Kazuhito Koishida
    • 🏛️ Institutions: CMU, MIT, NYU, Microsoft
    • 📅 Date: October 24, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [dataset], [video understanding], [long-context], [VideoWA]
    • 📖 TLDR: This paper introduces VideoWebArena (VideoWA), a benchmark assessing multimodal agents in video-based tasks. It features over 2,000 tasks focused on skill and factual retention, using video tutorials to simulate long-context environments. Results highlight current challenges in agentic abilities, providing a critical testbed for long-context video understanding improvements.
  • MobileSafetyBench: Evaluating Safety of Autonomous Agents in Mobile Device Control

    • Juyong Lee, Dongyoon Hahm, June Suk Choi, W. Bradley Knox, Kimin Lee
    • 🏛️ Institutions: KAIST, UT Austin
    • 📅 Date: October 23, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [benchmark], [safety], [evaluation], [Android emulator]
    • 📖 TLDR: MobileSafetyBench introduces a benchmark for evaluating the safety of large language model (LLM)-based autonomous agents in mobile device control. Using Android emulators, the benchmark simulates real-world tasks in apps such as messaging and banking to assess agents' safety and helpfulness. The safety-focused tasks test for privacy risk management and robustness against adversarial prompt injections. Experiments show agents perform well in helpful tasks but struggle with safety-related challenges, underscoring the need for continued advancements in mobile safety mechanisms for autonomous agents.
  • Large Language Models Empowered Personalized Web Agents

    • Hongru Cai, Yongqi Li, Wenjie Wang, Fengbin Zhu, Xiaoyu Shen, Wenjie Li, Tat-Seng Chua
    • 🏛️ Institutions: HK PolyU, NTU Singapore
    • 📅 Date: October 22, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [benchmark], [personalized web agent], [user behavior alignment], [memory-enhanced alignment]
    • 📖 TLDR: This paper proposes a novel framework, Personalized User Memory-enhanced Alignment (PUMA), enabling large language models to serve as personalized web agents by incorporating user-specific data and historical web interactions. The authors also introduce a benchmark, PersonalWAB, to evaluate these agents on various personalized web tasks. Results show that PUMA improves web agent performance by optimizing action execution based on user-specific preferences.
  • AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?

    • Ori Yoran, Samuel Joseph Amouyal, Chaitanya Malaviya, Ben Bogin, Ofir Press, Jonathan Berant
    • 🏛️ Institutions: Tel Aviv University
    • 📅 Date: October 21, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [dataset], [planning and reasoning]
    • 📖 TLDR: AssistantBench is a benchmark designed to test the abilities of web agents in completing time-intensive, realistic web-based tasks. Covering 214 tasks across various domains, the benchmark introduces the SPA (See-Plan-Act) framework to handle multi-step planning and memory retention. AssistantBench emphasizes realistic task completion, showing that current agents achieve only modest success, with significant improvements needed for complex information synthesis and execution across multiple web domains. A toy See-Plan-Act loop is sketched below.
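
As a rough illustration of the See-Plan-Act pattern described above, the sketch below keeps a memory of observations, revises a plan each step, and emits one action at a time. The planner and the agent interface are trivial stand-ins, not the paper's SPA implementation.

```python
# Toy See-Plan-Act loop: observe -> update memory -> plan -> act.
# The "planner" is a stand-in; a real SPA-style agent would call an LLM over
# the goal plus its accumulated memory at the planning step.
from dataclasses import dataclass, field


@dataclass
class ToySPAAgent:
    memory: list[str] = field(default_factory=list)

    def see(self, observation: str) -> None:
        self.memory.append(observation)

    def plan(self, goal: str) -> str:
        return f"search for: {goal}" if len(self.memory) < 2 else "extract and report the answer"

    def act(self, plan: str) -> str:
        return f"ACTION[{plan}]"


agent = ToySPAAgent()
goal = "find the cheapest direct flight from NYC to London next month"
for obs in ["search engine homepage", "list of flight results"]:
    agent.see(obs)
    print(agent.act(agent.plan(goal)))
```
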
  • SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation

    • Jingxuan Chen, Derek Yuen, Bin Xie, Yuhao Yang, Gongwei Chen, Zhihao Wu, Li Yixing, Xurui Zhou, Weiwen Liu, Shuai Wang, Rui Shao, Liqiang Nie, Yasheng Wang, Jianye Hao, Jun Wang, Kun Shao
    • 🏛️ Institutions: Huawei Noah’s Ark Lab, Harbin Institute of Technology, Shenzhen, UCL
    • 📅 Date: October 19, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [benchmark], [AI agent], [smartphone control], [framework]
    • 📖 TLDR: SPA-Bench is introduced as a benchmark designed to evaluate multimodal large language model (MLLM)-based smartphone agents, offering a task set that spans common smartphone functionalities across system and third-party applications. It includes a plug-and-play framework for real-time agent interactions on Android, integrating over ten agents with an adaptable evaluation pipeline measuring success across diverse metrics. Through this, the benchmark exposes challenges such as UI interpretation, action grounding, and memory retention in mobile environments, advancing research in smartphone-based agent applications.
  • AutoWebGLM: A Large Language Model-based Web Navigating Agent

    • Hanyu Lai, Xiao Liu, Iat Long Iong, Shuntian Yao, Yuxuan Chen, Pengbo Shen, Hao Yu, Hanchen Zhang, Xiaohan Zhang, Yuxiao Dong, Jie Tang
    • 🏛️ Institutions: THU, OSU
    • 📅 Date: October 12, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [dataset], [benchmark], [reinforcement learning]
    • 📖 TLDR: AutoWebGLM introduces a web navigation agent based on ChatGLM3-6B, designed to autonomously navigate and interact with webpages for complex tasks. The paper highlights a two-phase data construction approach using a hybrid human-AI methodology for diverse, curriculum-based web task training. It also presents AutoWebBench, a benchmark for evaluating agent performance in web tasks, and uses reinforcement learning to fine-tune operations, addressing complex webpage interaction and grounding.
  • ST-WebAgentBench: A Benchmark for Evaluating Safety and Trustworthiness in Web Agents

    • Ido Levy, Ben Wiesel, Sami Marreed, Alon Oved, Avi Yaeli, Segev Shlomov
    • 🏛️ Institutions: IBM Research
    • 📅 Date: October 9, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [safety], [trustworthiness], [ST-WebAgentBench]
    • 📖 TLDR: This paper introduces ST-WebAgentBench, a benchmark designed to evaluate the safety and trustworthiness of web agents in enterprise contexts. It defines safe and trustworthy agent behavior, outlines the structure of safety policies, and introduces the "Completion under Policies" metric to assess agent performance. The study reveals that current state-of-the-art agents struggle with policy adherence, highlighting the need for improved policy awareness and compliance in web agents.
  • ClickAgent: Enhancing UI Location Capabilities of Autonomous Agents

    • Jakub Hoscilowicz, Bartosz Maj, Bartosz Kozakiewicz, Oleksii Tymoschuk, Artur Janicki
    • 🏛️ Institutions: Samsung R&D Poland, Warsaw University of Technology
    • 📅 Date: October 9, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [framework], [model], [SeeClick], [AITW benchmark]
    • 📖 TLDR: The paper introduces ClickAgent, a framework that enhances autonomous agents' interaction with mobile UIs by improving their ability to locate interface elements accurately. This is achieved through a dual-component system where an MLLM performs reasoning and action planning, while a dedicated UI location model (e.g., SeeClick) handles element identification. ClickAgent, evaluated on the AITW benchmark and tested on both emulators and real Android devices, surpasses other agents like CogAgent and AppAgent in task success rate, advancing automation reliability on mobile platforms.
  • Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale

    • Rogerio Bonatti, Dan Zhao, Francesco Bonacci, Dillon Dupont, Sara Abdali, Yinheng Li, Yadong Lu, Justin Wagle, Kazuhito Koishida, Arthur Bucker, Lawrence Jang, Zack Hui
    • 🏛️ Institutions: Microsoft
    • 📅 Date: September 13, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Desktop]
    • 🔑 Key: [framework], [benchmark], [Navi]
    • 📖 TLDR: This paper introduces the Windows Agent Arena (WAA), a scalable platform for testing and benchmarking multi-modal AI agents within a realistic Windows OS environment. WAA enables researchers to evaluate agentic workflows across diverse tasks and supports large-scale deployment using Azure ML. The study also presents Navi, a multi-modal agent achieving a 19.5% success rate on Windows tasks, highlighting the platform's potential for advancing AI agent development.
  • From Grounding to Planning: Benchmarking Bottlenecks in Web Agents

    • Segev Shlomov, Ben Wiesel, Aviad Sela, Ido Levy, Liane Galanti, Roy Abitbol
    • 🏛️ Institutions: IBM
    • 📅 Date: September 3, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [planning], [grounding], [Mind2Web dataset], [web navigation]
    • 📖 TLDR: This paper analyzes performance bottlenecks in web agents by separately evaluating grounding and planning tasks, isolating their individual impacts on navigation efficacy. Using an enhanced version of the Mind2Web dataset, the study identifies planning as the dominant bottleneck, while grounding of elements such as UI components proves comparatively strong. Through experimental adjustments, the authors propose a refined evaluation framework, aiming to enhance web agents' contextual adaptability and accuracy in complex web environments.
  • VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents

    • Xiao Liu, Tianjie Zhang, Yu Gu, Iat Long Iong, Yifan Xu, Xixuan Song, Shudan Zhang, Hanyu Lai, Xinyi Liu, Hanlin Zhao, Jiadai Sun, Xinyue Yang, Yu Yang, Zehan Qi, Shuntian Yao, Xueqiao Sun, Siyi Cheng, Qinkai Zheng, Hao Yu, Hanchen Zhang, Wenyi Hong, Ming Ding, Lihang Pan, Xiaotao Gu, Aohan Zeng, Zhengxiao Du, Chan Hee Song, Yu Su, Yuxiao Dong, Jie Tang
    • 🏛️ Institutions: Tsinghua University, MSRA, The Ohio State University
    • 📅 Date: August 12, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [GUI]
    • 🔑 Key: [benchmark], [dataset], [VisualAgentBench], [VAB]
    • 📖 TLDR: The authors introduce VisualAgentBench (VAB), a comprehensive benchmark designed to train and evaluate large multimodal models (LMMs) as visual foundation agents across diverse scenarios, including embodied tasks, graphical user interfaces, and visual design. VAB comprises five distinct environments that systematically challenge LMMs' understanding and interaction capabilities. Additionally, the benchmark offers supervised fine-tuning trajectory data for behavior cloning training, demonstrating the potential to improve open LMMs for serving as visual foundation agents.
  • CoCo-Agent: A Comprehensive Cognitive MLLM Agent for Smartphone GUI Automation

    • Xinbei Ma, Zhuosheng Zhang, Hai Zhao
    • 🏛️ Institutions: SJTU
    • 📅 Date: August 2024
    • 📑 Publisher: ACL 2024
    • 💻 Env: [Mobile]
    • 🔑 Key: [model], [framework], [benchmark]
    • 📖 TLDR: This paper presents CoCo-Agent, a multimodal large language model (MLLM) designed for smartphone GUI automation. It introduces two novel approaches: Comprehensive Environment Perception (CEP) for enhanced GUI understanding, and Conditional Action Prediction (CAP) to improve action response accuracy. The proposed agent achieves state-of-the-art performance on GUI automation benchmarks such as AITW and META-GUI, showcasing its capabilities in realistic scenarios.
  • OfficeBench: Benchmarking Language Agents across Multiple Applications for Office Automation

    • Zilong Wang, Yuedong Cui, Li Zhong, Zimin Zhang, Da Yin, Bill Yuchen Lin, Jingbo Shang
    • 🏛️ Institutions: UCSD, UCLA, AI2
    • 📅 Date: July 26, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Desktop]
    • 🔑 Key: [benchmark], [multi-application], [office automation]
    • 📖 TLDR: OfficeBench introduces a benchmark that evaluates language models' ability to automate office tasks across a range of applications like Word, Excel, and email. The benchmark tests agents’ skills in task-switching, planning, and decision-making by simulating realistic office workflows. Current models, including GPT-4, demonstrate significant gaps in task accuracy and efficiency, revealing areas for improvement in managing complex, multi-application tasks in office environments.
  • Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?

    • Ruisheng Cao, Fangyu Lei, Haoyuan Wu, Jixuan Chen, Yeqiao Fu, Hongcheng Gao, Xinzhuang Xiong, Hanchong Zhang, Yuchen Mao, Wenjing Hu, Tianbao Xie, Hongsheng Xu, Danyang Zhang, Sida Wang, Ruoxi Sun, Pengcheng Yin, Caiming Xiong, Ansong Ni, Qian Liu, Victor Zhong, Lu Chen, Kai Yu, Tao Yu
    • 🏛️ Institutions: HKU, SJTU, Google Cloud AI Research, Google DeepMind, Salesforce Research, Yale University, Sea AI Lab, University of Waterloo
    • 📅 Date: July 15, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Desktop]
    • 🔑 Key: [benchmark], [dataset], [data science], [engineering workflows], [Spider2-V]
    • 📖 TLDR: This paper introduces Spider2-V, a multimodal agent benchmark designed to evaluate the capability of agents in automating professional data science and engineering workflows. It comprises 494 real-world tasks across 20 enterprise-level applications, assessing agents' proficiency in code generation and GUI operations within authentic computer environments.
  • WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks

    • Léo Boisvert, Megh Thakkar, Maxime Gasse, Massimo Caccia, Thibault Le Sellier De Chezelles, Quentin Cappart, Nicolas Chapados, Alexandre Lacoste, Alexandre Drouin
    • 🏛️ Institutions: ServiceNow Research, Mila, Polytechnique Montréal, Université de Montréal
    • 📅 Date: July 7, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [planning], [reasoning], [WorkArena++]
    • 📖 TLDR: This paper introduces WorkArena++, a benchmark comprising 682 tasks that simulate realistic workflows performed by knowledge workers. It evaluates web agents' capabilities in planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding. The study reveals challenges faced by current large language models and vision-language models in serving as effective workplace assistants, providing a resource to advance autonomous agent development.
  • CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents

    • Tianqi Xu, Linyao Chen, Dai-Jie Wu, Yanjun Chen, Zecheng Zhang, Xiang Yao, Zhiqiang Xie, Yongchao Chen, Shilong Liu, Bochen Qian, Philip Torr, Bernard Ghanem, Guohao Li
    • 🏛️ Institutions: KAUST, UTokyo, CMU, Stanford, Harvard, Tsinghua University, SUSTech, Oxford
    • 📅 Date: July 3, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [GUI]
    • 🔑 Key: [benchmark], [framework], [evaluation], [CRAB]
    • 📖 TLDR: The authors present CRAB, a benchmark framework designed to evaluate Multimodal Language Model agents across multiple environments. It features a graph-based fine-grained evaluation method and supports automatic task generation, addressing limitations in existing benchmarks.
  • AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents

    • Yuxiang Chai, Siyuan Huang, Yazhe Niu, Han Xiao, Liang Liu, Dingyu Zhang, Peng Gao, Shuai Ren, Hongsheng Li
    • 🏛️ Institutions: CUHK, SJTU, Shanghai AI Lab, vivo AI Lab
    • 📅 Date: July 3, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [dataset], [benchmark], [AMEX]
    • 📖 TLDR: This paper introduces the Android Multi-annotation EXpo (AMEX), a comprehensive dataset designed for training and evaluating mobile GUI-control agents. AMEX comprises over 104K high-resolution screenshots from 110 popular mobile applications, annotated at multiple levels, including GUI interactive element grounding, functionality descriptions, and complex natural language instructions. The dataset aims to advance research on AI agents capable of completing complex tasks by interacting directly with mobile device GUIs.
  • E-ANT: A Large-Scale Dataset for Efficient Automatic GUI NavigaTion

    • Ke Wang, Tianyu Xia, Zhangxuan Gu, Yi Zhao, Shuheng Shen, Changhua Meng, Weiqiang Wang, Ke Xu
    • 🏛️ Institutions: Ant Group, Tsinghua University
    • 📅 Date: June 20, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [dataset], [benchmark], [E-ANT]
    • 📖 TLDR: This paper introduces E-ANT, the first large-scale Chinese GUI navigation dataset comprising over 40,000 real human interaction traces across more than 5,000 tiny apps. The dataset includes high-quality screenshots with annotations, facilitating the evaluation and development of GUI navigation and decision-making capabilities in multimodal large language models (MLLMs). The authors also assess various MLLMs on E-ANT, providing insights into their performance and potential improvements.
  • WebCanvas: Benchmarking Web Agents in Online Environments

    • Yichen Pan, Dehan Kong, Sida Zhou, Cheng Cui, Yifei Leng, Bing Jiang, Hangyu Liu, Yanyi Shang, Shuyan Zhou, Tongshuang Wu, Zhengyang Wu
    • 🏛️ Institutions: iMean AI, CMU
    • 📅 Date: June 18, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [dataset], [benchmark], [Mind2Web-Live], [key-node evaluation]
    • 📖 TLDR: This paper presents WebCanvas, an online evaluation framework for web agents designed to address the dynamic nature of web interactions. It introduces a key-node-based evaluation metric to capture critical actions or states necessary for task completion while disregarding noise from insignificant events or changed web elements. The framework includes the Mind2Web-Live dataset, a refined version of the original Mind2Web static dataset, containing 542 tasks with 2,439 intermediate evaluation states. Despite advancements, the best-performing model achieves a task success rate of 23.1%, highlighting substantial room for improvement. A simplified version of the key-node scoring idea is sketched below.
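
A simplified reading of the key-node idea: a task defines a handful of critical intermediate states or actions, and a run is scored by how many of them it reaches in order, ignoring everything else. The function below illustrates that scoring scheme; it is not WebCanvas's actual implementation.

```python
# Simplified key-node scoring: fraction of key nodes matched, in order,
# within an agent's action/state trace (illustrative, not WebCanvas's code).
from typing import List


def key_node_score(trace: List[str], key_nodes: List[str]) -> float:
    matched, i = 0, 0
    for event in trace:
        if i < len(key_nodes) and key_nodes[i] in event:
            matched += 1
            i += 1
    return matched / len(key_nodes) if key_nodes else 1.0


trace = ['goto("flights.example.com")', 'fill("origin", "NYC")', 'click("search")']
key_nodes = ['fill("origin"', 'click("search")']
print(key_node_score(trace, key_nodes))  # 1.0: both key nodes reached in order
```
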
  • Adversarial Attacks on Multimodal Agents

    • Chen Henry Wu, Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried, Aditi Raghunathan
    • 🏛️ Institutions: CMU
    • 📅 Date: June 18, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [safety], [VisualWebArena-Adv]
    • 📖 TLDR: This paper investigates the safety risks posed by multimodal agents built on vision-enabled language models (VLMs). The authors introduce two adversarial attack methods: a captioner attack targeting white-box captioners and a CLIP attack that transfers to proprietary VLMs. To evaluate these attacks, they curated VisualWebArena-Adv, a set of adversarial tasks based on VisualWebArena. The study demonstrates that within a limited perturbation norm, the captioner attack can achieve a 75% success rate in making a captioner-augmented GPT-4V agent execute adversarial goals. The paper also discusses the robustness of agents based on other VLMs and provides insights into factors contributing to attack success and potential defenses.
  • GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents

    • Dongping Chen, Yue Huang, Siyuan Wu, Jingyu Tang, Liuyi Chen, Yilin Bai, Zhigang He, Chenlong Wang, Huichi Zhou, Yiqiang Li, Tianshuo Zhou, Yue Yu, Chujie Gao, Qihui Zhang, Yi Gui, Zhen Li, Yao Wan, Pan Zhou, Jianfeng Gao, Lichao Sun
    • 🏛️ Institutions: Huazhong University of Science and Technology (HUST), MSR, University of Illinois at Chicago (UIC)
    • 📅 Date: June 16, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [GUI]
    • 🔑 Key: [dataset], [benchmark], [GUI-World], [GUI-Vid]
    • 📖 TLDR: This paper introduces GUI-World, a comprehensive dataset designed to evaluate Multimodal Large Language Models (MLLMs) in dynamic and complex GUI environments. It includes over 12,000 annotated GUI interaction videos covering diverse applications and scenarios. The study highlights the limitations of current MLLMs in handling dynamic and multi-step tasks and presents GUI-Vid, a fine-tuned VideoLLM, demonstrating improved understanding of various GUI tasks.
  • MobileAgentBench: An Efficient and User-Friendly Benchmark for Mobile LLM Agents

    • Luyuan Wang, Yongyu Deng, Yiwei Zha, Guodong Mao, Qinmin Wang, Tianchen Min, Wei Chen, Shoufa Chen
    • 🏛️ Institutions: CMU, University of Michigan, Northeastern University, HKU
    • 📅 Date: June 12, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [benchmark], [MobileAgentBench]
    • 📖 TLDR: This paper introduces MobileAgentBench, a benchmark designed to evaluate the performance of large language model-based mobile agents. It defines 100 tasks across 10 open-source apps, categorized by difficulty levels, and assesses existing agents like AppAgent and MobileAgent to facilitate systematic comparisons.
  • WebSuite: Systematically Evaluating Why Web Agents Fail

    • Eric Li, Jim Waldo
    • 🏛️ Institutions: Harvard
    • 📅 Date: June 1, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [framework], [failure analysis], [analysis], [task disaggregation]
    • 📖 TLDR: This paper introduces WebSuite, a diagnostic benchmark to investigate the causes of web agent failures. By categorizing agent tasks using a taxonomy of operational, informational, and navigational actions, WebSuite offers granular insights into the specific actions where agents struggle, like filtering or form completion. It enables detailed comparison across agents, identifying areas for architectural and UX adaptation to improve agent reliability and task success on the web.
  • VideoGUI: A Benchmark for GUI Automation from Instructional Videos

    • Kevin Qinghong Lin, Linjie Li, Difei Gao, Qinchen Wu, Mingyi Yan, Zhengyuan Yang, Lijuan Wang, Mike Zheng Shou
    • 🏛️ Institutions: NUS, Microsoft Gen AI
    • 📅 Date: June 2024
    • 📑 Publisher: NeurIPS 2024
    • 💻 Env: [Desktop, Web]
    • 🔑 Key: [benchmark], [instructional videos], [visual planning], [hierarchical task decomposition], [complex software interaction]
    • 📖 TLDR: VideoGUI presents a benchmark for evaluating GUI automation on tasks derived from instructional videos, focusing on visually intensive applications like Adobe Photoshop and video editing software. The benchmark includes 178 tasks, with a hierarchical evaluation method distinguishing high-level planning, mid-level procedural steps, and precise action execution. VideoGUI reveals current model limitations in complex visual tasks, marking a significant step toward improved visual planning in GUI automation.
  • AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents

    • Christopher Rawles, Sarah Clinckemaillie, Yifan Chang, Jonathan Waltz, Gabrielle Lau, Marybeth Fair, Alice Li, William Bishop, Wei Li, Folawiyo Campbell-Ajala, Daniel Toyama, Robert Berry, Divya Tyamagundlu, Timothy Lillicrap, Oriana Riva
    • 🏛️ Institutions: Google DeepMind, Google
    • 📅 Date: May 23, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [benchmark], [Android-based agents], [task diversity], [reinforcement learning], [dynamic environment]
    • 📖 TLDR: AndroidWorld introduces a dynamic Android environment for benchmarking autonomous agents across 116 tasks spanning 20 Android apps. These tasks vary through parameterized and natural language prompts, fostering a realistic testing ground for agents designed to operate in complex mobile environments. The benchmark supports millions of task variations, allowing agents to respond to the Android system's changing states and improving real-world applicability.
  • MMInA: Benchmarking Multihop Multimodal Internet Agents

    • Ziniu Zhang, Shulin Tian, Liangyu Chen, Ziwei Liu
    • 🏛️ Institutions: NTU
    • 📅 Date: April 15, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [framework], [multihop web browsing], [multimodal tasks], [long-range reasoning]
    • 📖 TLDR: The MMInA benchmark is designed to evaluate agents' capacity to complete complex, multihop web tasks by navigating and extracting information across evolving real-world websites. Composed of 1,050 tasks across diverse domains, MMInA challenges agents with realistic, multimodal information retrieval and reasoning tasks, such as comparative shopping and travel inquiries. Despite recent advances, agents show difficulties in handling tasks requiring sequential steps across multiple sites, underscoring the need for enhanced multimodal and memory-augmented models.
  • LlamaTouch: A Faithful and Scalable Testbed for Mobile UI Automation Task Evaluation

    • Li Zhang, Shihe Wang, Xianqing Jia, Zhihan Zheng, Yunhe Yan, Longxi Gao, Yuanchun Li, Mengwei Xu
    • 🏛️ Institutions: BUPT, Tsinghua University
    • 📅 Date: April 12, 2024
    • 📑 Publisher: UIST 2024
    • 💻 Env: [Mobile]
    • 🔑 Key: [framework], [dataset], [benchmark], [UI automation], [mobile agent evaluation]
    • 📖 TLDR: LlamaTouch is an evaluation testbed designed for mobile UI automation, enabling reliable task assessment across 495 annotated tasks. It provides a scalable solution to evaluate agents in real-world mobile settings, comparing agent actions to essential UI states for accurate task completion. LlamaTouch supports dynamic environments, advancing mobile agent reliability and scalability in task automation.
  • OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments

    • Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, Yitao Liu, Yiheng Xu, Shuyan Zhou, Silvio Savarese, Caiming Xiong, Victor Zhong, Tao Yu
    • 🏛️ Institutions: HKU, CMU, Salesforce, University of Waterloo
    • 📅 Date: April 11, 2024
    • 📑 Publisher: NeurIPS 2024
    • 💻 Env: [GUI]
    • 🔑 Key: [benchmark], [real computer tasks], [online environment], [online benchmark]
    • 📖 TLDR: OSWorld introduces a groundbreaking benchmark for multimodal agents to perform open-ended tasks within real computer environments across platforms like Ubuntu, Windows, and macOS. It includes 369 real-world tasks involving web and desktop apps, file management, and multi-app workflows, with custom evaluation scripts for reproducibility. The results reveal current agents’ limitations in GUI interaction and operational knowledge, as they achieve just 12.24% task success compared to humans' 72.36%, highlighting critical gaps for future model improvement.
  • Autonomous Evaluation and Refinement of Digital Agents

    • Jiayi Pan, Yichi Zhang, Nicholas Tomlin, Yifei Zhou, Sergey Levine, Alane Suhr
    • 🏛️ Institutions: UCB, UMich
    • 📅 Date: April 9, 2024
    • 📑 Publisher: COLM 2024
    • 💻 Env: [GUI]
    • 🔑 Key: [framework], [benchmark], [evaluation model], [domain transfer]
    • 📖 TLDR: This paper presents an autonomous evaluation framework for digital agents to enhance performance on web navigation and device control. The study introduces modular, cost-effective evaluators achieving up to 92.9% accuracy in benchmarks like WebArena and outlines their use in fine-tuning agents, improving state-of-the-art by 29% without additional supervision.
  • VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?

    • Junpeng Liu, Yifan Song, Bill Yuchen Lin, Wai Lam, Graham Neubig, Yuanzhi Li, Xiang Yue
    • 🏛️ Institutions: CMU
    • 📅 Date: April 9, 2024
    • 📑 Publisher: COLM 2024
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [dataset], [web page understanding], [grounding]
    • 📖 TLDR: VisualWebBench introduces a comprehensive benchmark for evaluating multimodal large language models (MLLMs) on web-based tasks. It includes 1.5K human-curated instances across 139 websites in 87 sub-domains. The benchmark spans seven tasks—such as OCR, grounding, and web-based QA—aiming to test MLLMs' capabilities in fine-grained web page understanding. Results reveal significant performance gaps, particularly in grounding tasks, highlighting the need for advancement in MLLM web understanding.
  • Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs

    • Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, Zhe Gan
    • 🏛️ Institutions: Apple
    • 📅 Date: April 8, 2024
    • 📑 Publisher: ECCV 2024
    • 💻 Env: [Mobile]
    • 🔑 Key: [model], [framework], [dataset], [benchmark], [mobile UI understanding]
    • 📖 TLDR: This paper presents Ferret-UI, a multimodal large language model (MLLM) designed to understand and interact with mobile user interfaces. The model incorporates advanced capabilities for referring, grounding, and reasoning about UI elements. By training on a variety of UI tasks, Ferret-UI achieves high performance in tasks such as icon recognition and text extraction. The authors introduce a unique architecture that allows for improved visual feature extraction from mobile screens, paving the way for applications in accessibility and user interaction.
  • Enhancing Mobile "How-to" Queries with Automated Search Results Verification and Reranking

    • Lei Ding, Jeshwanth Bheemanpally, Yi Zhang
    • 🏛️ Institutions: UCSC
    • 📅 Date: April 2024
    • 📑 Publisher: SIGIR 2024
    • 💻 Env: [Mobile]
    • 🔑 Key: [framework], [benchmark], [reranking], [verification], [mobile task automation]
    • 📖 TLDR: This paper presents a system that enhances mobile "how-to" queries by verifying and reranking search results through automated instruction extraction, on-device action execution, and reranking based on relevance. The method improves on traditional ranking by analyzing device-specific execution success. The approach comprises a three-stage pipeline: 1) extracting step-by-step instructions from top search results, 2) validating these instructions on mobile devices, and 3) reranking based on performance. The system leverages a pre-trained GPT model for initial processing, ensuring adaptability across diverse apps and systems. A stubbed version of this pipeline is sketched below.
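
The three-stage pipeline can be pictured as below: extract candidate step lists from each search result, attempt them, and rerank by execution success. Both the extractor and the executor are stubs standing in for the GPT-based extraction and the on-device validation the paper describes.

```python
# Stubbed sketch of the extract -> validate -> rerank pipeline (illustrative).
def extract_steps(result_text: str) -> list[str]:
    # Stub: treat numbered lines as steps; the paper uses a GPT model here.
    return [line.strip() for line in result_text.splitlines() if line.strip()[:1].isdigit()]


def execute_on_device(steps: list[str]) -> float:
    # Stub success score; the paper actually replays steps on a real device.
    return 1.0 / (1 + len(steps)) if steps else 0.0


def rerank(search_results: dict[str, str]) -> list[str]:
    scores = {url: execute_on_device(extract_steps(text)) for url, text in search_results.items()}
    return sorted(scores, key=scores.get, reverse=True)


results = {
    "example.com/a": "1. Open Settings\n2. Tap Wi-Fi\n3. Toggle Wi-Fi on",
    "example.com/b": "1. Open Settings\n2. Tap Network & internet\n3. Tap Wi-Fi\n4. Toggle on",
}
print(rerank(results))  # ['example.com/a', 'example.com/b']
```
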
  • Benchmarking Mobile Device Control Agents across Diverse Configurations

    • Juyong Lee, Taywon Min, Minyong An, Dongyoon Hahm, Haeone Lee, Changyeon Kim, Kimin Lee
    • 🏛️ Institutions: KAIST, Seoul National University, Yonsei University
    • 📅 Date: April 2024
    • 📑 Publisher: ICLR 2024
    • 💻 Env: [Mobile]
    • 🔑 Key: [benchmark], [dataset], [mobile device control], [agent performance]
    • 📖 TLDR: This paper presents B-MoCA, a comprehensive benchmark for evaluating mobile device control agents using an Android-based testbed with 131 tasks and various device configurations. The benchmark assesses agents' abilities across tasks that include device-specific variations, navigation, and human-like dual-gesture interactions. B-MoCA highlights that current agents perform well on basic tasks but struggle with complex configurations, pointing to opportunities for future improvements in mobile automation capabilities.
  • AgentStudio: A Toolkit for Building General Virtual Agents

    • Longtao Zheng, Zhiyuan Huang, Zhenghai Xue, Xinrun Wang, Bo An, Shuicheng Yan
    • 🏛️ Institutions: NTU, Skywork AI, ETH Zurich
    • 📅 Date: March 26, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Desktop]
    • 🔑 Key: [framework], [dataset], [general virtual agents], [open-ended learning], [tool creation], [GroundUI], [benchmark]
    • 📖 TLDR: AgentStudio is a robust toolkit for developing virtual agents with versatile actions, such as GUI automation and code execution. It unifies real-world human-computer interactions across OS platforms and includes diverse observation and action spaces, facilitating comprehensive training and benchmarking in complex settings. The toolkit's flexibility promotes agent generalization across varied tasks, supporting tool creation and a multimodal interaction interface to advance agent adaptability and learning.
  • Tur[k]ingBench: A Challenge Benchmark for Web Agents

    • Kevin Xu, Yeganeh Kordi, Kate Sanders, Yizhong Wang, Adam Byerly, Jingyu Zhang, Benjamin Van Durme, Daniel Khashabi
    • 🏛️ Institutions: JHU, Brown, UW
    • 📅 Date: March 18, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [dataset], [multi-modal reasoning], [TurkingBench], [Turking]
    • 📖 TLDR: This paper introduces Tur[k]ingBench, a benchmark comprising 158 web-grounded tasks designed to evaluate AI agents' capabilities in complex web-based environments. Unlike prior benchmarks that utilize synthetic web pages, Tur[k]ingBench leverages natural HTML pages from crowdsourcing platforms, presenting tasks with rich multi-modal contexts. The benchmark includes 32.2K instances, each with diverse inputs, challenging models to interpret and interact with web pages effectively. Evaluations of state-of-the-art models reveal significant room for improvement, highlighting the need for advanced web-based agents capable of handling real-world web interactions.
  • WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks?

    • Alexandre Drouin, Maxime Gasse, Massimo Caccia, Issam H. Laradji, Manuel Del Verme, Tom Marty, Léo Boisvert, Megh Thakkar, Quentin Cappart, David Vazquez, Nicolas Chapados, Alexandre Lacoste
    • 🏛️ Institutions: ServiceNow Research, Mila, Polytechnique Montréal, McGill University, Université de Montréal
    • 📅 Date: March 11, 2024
    • 📑 Publisher: ICML 2024
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [enterprise task automation], [ServiceNow], [knowledge work automation]
    • 📖 TLDR: WorkArena introduces a robust benchmark hosted on the ServiceNow platform to assess the effectiveness of large language model-based agents in performing 33 knowledge tasks common to enterprise environments. Leveraging BrowserGym, an environment that simulates complex browser interactions, WorkArena provides web agents with realistic challenges like data entry, form completion, and information retrieval in knowledge bases. Despite promising initial results, open-source models show a 42.7% success rate compared to closed-source counterparts, underlining the current gap in task automation for enterprise applications and highlighting key areas for improvement.
  • On the Multi-turn Instruction Following for Conversational Web Agents

    • Yang Deng, Xuan Zhang, Wenxuan Zhang, Yifei Yuan, See-Kiong Ng, Tat-Seng Chua
    • 🏛️ Institutions: NUS, DAMO Academy, University of Copenhagen
    • 📅 Date: February 23, 2024
    • 📑 Publisher: ACL 2024
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [dataset], [multi-turn dialogue], [memory utilization], [self-reflective planning]
    • 📖 TLDR: This paper explores multi-turn conversational web navigation, introducing the MT-Mind2Web dataset to support instruction-following tasks for web agents. The proposed Self-MAP (Self-Reflective Memory-Augmented Planning) framework enhances agent performance by integrating memory with self-reflection for sequential decision-making in complex interactions. Extensive evaluations using MT-Mind2Web demonstrate Self-MAP's efficacy in addressing the limitations of current models in multi-turn interactions, providing a novel dataset and framework for evaluating and training agents on detailed, multi-step web-based tasks.
  • WebLINX: Real-World Website Navigation with Multi-Turn Dialogue

    • Xing Han Lu, Zdeněk Kasner, Siva Reddy
    • 🏛️ Institutions: Mila, McGill University
    • 📅 Date: February 2024
    • 📑 Publisher: ICML 2024
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [dataset], [benchmark], [multi-turn dialogue], [real-world navigation], [WebLINX]
    • 📖 TLDR: WebLINX addresses the complexity of real-world website navigation for conversational agents, with a benchmark featuring over 2,300 demonstrations across 150+ websites. The benchmark allows agents to handle multi-turn instructions and interact dynamically across diverse domains, including geographic and thematic categories. The study proposes a retrieval-inspired model that selectively extracts key HTML elements and browser actions, achieving efficient task-specific representations. Experiments reveal that smaller finetuned decoders outperform larger zero-shot multimodal models, though generalization to new environments remains challenging.
  • OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web

    • Raghav Kapoor, Yash Parag Butala, Melisa Russak, Jing Yu Koh, Kiran Kamble, Waseem Alshikh, Ruslan Salakhutdinov
    • 🏛️ Institutions: CMU
    • 📅 Date: February 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Desktop]
    • 🔑 Key: [dataset], [benchmark]
    • 📖 TLDR: OmniACT introduces a dataset and benchmark to train and evaluate multimodal agents capable of autonomously performing diverse tasks across desktop and web environments. Using annotated UI elements across applications, it combines visual grounding with natural language instructions, providing 9,802 data points for developing agents that integrate high-level reasoning with UI interactions. The study highlights the limited proficiency of current models, with baselines like GPT-4 only achieving 15% of human performance on executable scripts, emphasizing OmniACT's potential as a testbed for advancing multimodal AI.
  • Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception

    • Junyang Wang, Haiyang Xu, Jiabo Ye, Ming Yan, Weizhou Shen, Ji Zhang, Fei Huang, Jitao Sang
    • 🏛️ Institutions: Beijing Jiaotong University, Alibaba
    • 📅 Date: January 29, 2024
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [framework], [benchmark]
    • 📖 TLDR: This paper presents Mobile-Agent, an autonomous multi-modal agent designed for mobile device interaction. The system integrates visual perception, natural language processing, and action prediction to navigate and operate mobile applications. The authors introduce a new dataset and benchmark for evaluating mobile agents, demonstrating Mobile-Agent's superior performance in task completion and generalization across various apps compared to existing methods.
  • VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks

    • Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Ruslan Salakhutdinov, Daniel Fried
    • 🏛️ Institutions: CMU
    • 📅 Date: January 24, 2024
    • 📑 Publisher: ACL 2024
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [benchmark], [dataset], [multimodal agent evaluation], [visually grounded tasks]
    • 📖 TLDR: VisualWebArena is a benchmark designed for testing multimodal web agents on complex, visually grounded web tasks. It provides a reproducible framework with 910 task scenarios across real-world web applications, emphasizing open-ended, visually guided interactions. The tasks are modeled within a partially observable Markov decision process to assess agents’ capacity to interpret multimodal inputs, execute navigation, and accomplish user-defined objectives across complex visual and textual information on websites.
  • WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models

    • Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Yong Dai, Hongming Zhang, Zhenzhong Lan, Dong Yu
    • 🏛️ Institutions: Zhejiang University, Tencent AI Lab, Westlake University
    • 📅 Date: January 24, 2024
    • 📑 Publisher: ACL 2024
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [evaluation]
    • 📖 TLDR: This paper introduces WebVoyager, an innovative web agent powered by Large Multimodal Models (LMMs) that can complete user instructions end-to-end by interacting with real-world websites. The authors establish a new benchmark with tasks from 15 popular websites and propose an automatic evaluation protocol using GPT-4V. WebVoyager achieves a 59.1% task success rate, significantly outperforming GPT-4 (All Tools) and text-only setups. The study demonstrates the effectiveness of multimodal approaches in web automation and provides insights into developing more intelligent web interaction solutions.
  • SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents

    • Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu
    • 🏛️ Institutions: Nanjing University, Shanghai AI Lab
    • 📅 Date: January 19, 2024
    • 📑 Publisher: ACL 2024
    • 💻 Env: [GUI]
    • 🔑 Key: [model], [benchmark], [GUI grounding], [visual grounding]
    • 📖 TLDR: This paper presents SeeClick, a visual GUI agent that operates purely from screenshots rather than structured text such as HTML or accessibility trees. Identifying GUI grounding as a key bottleneck, the authors enhance SeeClick with dedicated GUI grounding pre-training and introduce ScreenSpot, a realistic GUI grounding benchmark spanning mobile, desktop, and web interfaces. Experiments show that improved grounding translates into stronger downstream performance on GUI agent tasks.
  • AgentBench: Evaluating LLMs as Agents

    • Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
    • 🏛️ Institutions: THU, OSU, ByteDance
    • 📅 Date: January 1, 2024
    • 📑 Publisher: ICLR 2024
    • 💻 Env: [GUI], [General]
    • 🔑 Key: [benchmark], [evaluation]
    • 📖 TLDR: AgentBench provides a comprehensive benchmark for evaluating LLMs as autonomous agents in various environments. It includes eight distinct scenarios, testing the LLMs' reasoning and decision-making capabilities in tasks such as OS interaction, database querying, knowledge graph traversal, and more. This benchmark compares the effectiveness of multiple commercial and open-source LLMs, revealing areas of improvement in instruction-following and long-term reasoning, essential for practical agent development.
  • GPT-4V(ision) is a Generalist Web Agent, if Grounded

    • Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, Yu Su
    • 🏛️ Institutions: OSU
    • 📅 Date: January 1, 2024
    • 📑 Publisher: ICML 2024
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [dataset], [benchmark], [grounding], [SeeAct], [Multimodal-Mind2web]
    • 📖 TLDR: This paper explores the capability of GPT-4V(ision), a multimodal model, as a web agent that can perform tasks across various websites by following natural language instructions. It introduces the SEEACT framework, enabling GPT-4V to navigate, interpret, and interact with elements on websites. Evaluated using the Mind2Web benchmark and an online test environment, the framework demonstrates high performance on complex web tasks by integrating grounding strategies like element attributes and image annotations to improve HTML element targeting. However, grounding remains challenging, presenting opportunities for further improvement.
  • Multimodal Web Navigation with Instruction-Finetuned Foundation Models

    • Hiroki Furuta, Kuang-Huei Lee, Ofir Nachum, Yutaka Matsuo, Aleksandra Faust, Shixiang Shane Gu, Izzeddin Gur
    • 🏛️ Institutions: University of Tokyo, Google DeepMind
    • 📅 Date: January 1, 2024
    • 📑 Publisher: ICLR 2024
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [model], [dataset], [web navigation], [instruction-following], [WebShop]
    • 📖 TLDR: This paper introduces WebGUM, an instruction-following multimodal agent for autonomous web navigation that leverages both visual (webpage screenshots) and textual (HTML) inputs to perform actions such as click and type. The model is trained on a vast corpus of demonstrations and shows improved capabilities in visual perception, HTML comprehension, and multi-step decision-making, achieving state-of-the-art performance on benchmarks like MiniWoB and WebShop. WebGUM provides a scalable approach to web-based tasks without task-specific architectures, enabling high-performance web navigation with generalizable, multimodal foundation models.
  • AssistGUI: Task-Oriented Desktop Graphical User Interface Automation

    • Difei Gao, Lei Ji, Zechen Bai, Mingyu Ouyang, Peiran Li, Dongxing Mao, Qinchen Wu, Weichen Zhang, Peiyi Wang, Xiangwu Guo, Hengxu Wang, Luowei Zhou, Mike Zheng Shou
    • 🏛️ Institutions: NUS
    • 📅 Date: December 20, 2023
    • 📑 Publisher: CVPR 2024
    • 💻 Env: [Desktop]
    • 🔑 Key: [framework], [dataset], [benchmark], [desktop productivity tasks]
    • 📖 TLDR: This study presents AssistGUI, a benchmark and framework for desktop GUI automation, featuring an LLM-based agent capable of completing complex user requests by analyzing instructional videos and performing actions on the desktop. Utilizing a novel Actor-Critic framework and GUI parser, AssistGUI was tested on 100 tasks across nine applications, such as MS Word and After Effects. Despite advances, the top-performing model achieved only a 46% success rate, illustrating the challenge of comprehensive desktop automation and underscoring areas for future research in agent-driven GUI tasks.
  • CogAgent: A Visual Language Model for GUI Agents

    • Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng Xu, Wenmeng Yu, Junhao Chen, Yuxuan Wang, Yining Ye, Jiayi Zhang, Hao Dong, Wenhu Chen, Yizhou Wang, Kai-Wei Chang
    • 🏛️ Institutions: Tsinghua University, Zhipu AI
    • 📅 Date: December 15, 2023
    • 📑 Publisher: CVPR 2024
    • 💻 Env: [GUI]
    • 🔑 Key: [model], [dataset], [benchmark], [visual language model], [GUI agent]
    • 📖 TLDR: This paper presents CogAgent, a visual language model designed for GUI agents. The authors introduce a new dataset, CogBench, featuring 1,430 GUI tasks across various applications. CogAgent employs a novel training approach combining supervised fine-tuning and decision-making fine-tuning. The model demonstrates superior performance on CogBench and generalizes well to unseen applications, outperforming existing models like GPT-4V in GUI task completion.
  • GAIA: a benchmark for General AI Assistants

    • Grégoire Mialon, Clémentine Fourrier, Craig Swift, Thomas Wolf, Yann LeCun, Thomas Scialom
    • 🏛️ Institutions: Meta AI, Hugging Face
    • 📅 Date: November 21, 2023
    • 📑 Publisher: arXiv
    • 💻 Env: [Misc]
    • 🔑 Key: [benchmark], [multi-modality], [tool use], [reasoning]
    • 📖 TLDR: GAIA is a benchmark developed for evaluating general-purpose AI assistants. It aims to test assistant models across multiple modalities and complex reasoning tasks in real-world settings, including scenarios that require tool usage and open-ended question answering. With a dataset comprising 466 questions across various domains, GAIA highlights gaps between current AI performance and human capability, presenting a significant challenge for large language models such as GPT-4.
  • GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation

    • An Yan, Zhengyuan Yang, Wanrong Zhu, Kevin Lin, Linjie Li, Jianfeng Wang, Jianwei Yang, Yiwu Zhong, Julian McAuley, Jianfeng Gao, Zicheng Liu, Lijuan Wang
    • 🏛️ Institutions: UCSD, Microsoft, UCSB, UWM
    • 📅 Date: November 13, 2023
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [framework], [benchmark], [zero-shot GUI navigation], [multimodal LLMs]
    • 📖 TLDR: This paper explores the capabilities of GPT-4V in navigating smartphone GUIs without prior training. The authors introduce a novel framework for GUI navigation and a new benchmark, MobileNav, featuring 1,000 navigation tasks across 100 mobile apps. The study demonstrates GPT-4V's impressive zero-shot performance in understanding and interacting with mobile interfaces, outperforming previous methods and even approaching human-level performance on some tasks.
  • Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V

    • Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, Jianfeng Gao
    • 🏛️ Institutions: MSR
    • 📅 Date: October 17, 2023
    • 📑 Publisher: arXiv
    • 💻 Env: [Misc]
    • 🔑 Key: [visual prompting], [framework], [benchmark], [visual grounding], [zero-shot]
    • 📖 TLDR: This paper introduces Set-of-Mark (SoM), a novel visual prompting approach designed to enhance the visual grounding capabilities of multimodal models like GPT-4V. By overlaying images with spatially and semantically distinct marks, SoM enables fine-grained object recognition and interaction within visual data, surpassing conventional zero-shot segmentation methods in accuracy. The framework is validated on tasks requiring detailed spatial reasoning, demonstrating a significant improvement over existing visual-language models without fine-tuning. A minimal mark-overlay sketch follows below.
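
The core mechanic is simple to illustrate: overlay numbered marks on candidate regions of an image so the model can refer to "the element labeled 2" instead of pixel coordinates. The boxes below are placeholders; in the paper the regions come from off-the-shelf segmentation models.

```python
# Illustrative Set-of-Mark-style overlay: draw numbered marks on candidate
# regions so a multimodal model can be queried about them by index.
from PIL import Image, ImageDraw


def overlay_marks(image: Image.Image, boxes: list[tuple[int, int, int, int]]) -> Image.Image:
    marked = image.copy()
    draw = ImageDraw.Draw(marked)
    for idx, (x0, y0, x1, y1) in enumerate(boxes, start=1):
        draw.rectangle((x0, y0, x1, y1), outline="red", width=3)
        draw.text((x0 + 4, y0 + 4), str(idx), fill="red")
    return marked


screenshot = Image.new("RGB", (400, 300), "white")  # stand-in for a real screenshot
marked = overlay_marks(screenshot, [(20, 20, 120, 80), (200, 150, 360, 260)])
marked.save("som_prompt.png")  # sent to the model alongside a textual question
```
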
  • You Only Look at Screens: Multimodal Chain-of-Action Agents

    • Zhuosheng Zhang, Aston Zhang
    • 🏛️ Institutions: SJTU
    • 📅 Date: September 20, 2023
    • 📑 Publisher: ICLR 2024
    • 💻 Env: [GUI]
    • 🔑 Key: [framework], [dataset], [benchmark], [multimodal agent], [chain-of-action technique]
    • 📖 TLDR: This paper presents Auto-GUI, a multimodal agent capable of directly interacting with graphical user interfaces without relying on environment parsing or application-specific APIs. The authors introduce a novel chain-of-action technique that leverages previous action histories and future action plans to improve decision-making. Auto-GUI is evaluated on the AITW device-control benchmark, demonstrating state-of-the-art performance in action prediction and task completion across various applications and web-based tasks.
  • AutoDroid: LLM-powered Task Automation in Android

    • Hao Wen, Yuanchun Li, Guohong Liu, Shanhui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang, Yunhao Liu, Yaqin Zhang, Yunxin Liu
    • 🏛️ Institutions: Tsinghua University, Shanghai AI Lab, University of Notre Dame, MSR
    • 📅 Date: August 29, 2023
    • 📑 Publisher: MobiCom 2024
    • 💻 Env: [Mobile]
    • 🔑 Key: [framework], [dataset], [benchmark], [Android task automation], [LLM-powered agent]
    • 📖 TLDR: This paper introduces AutoDroid, a novel mobile task automation system capable of handling arbitrary tasks on any Android application without manual efforts. The framework combines the commonsense knowledge of LLMs with domain-specific knowledge of apps through automated dynamic analysis. AutoDroid features a functionality-aware UI representation method, exploration-based memory injection techniques, and a multi-granularity query optimization module. Evaluated on a new benchmark with 158 common tasks, AutoDroid achieves a 90.9% action generation accuracy and a 71.3% task completion rate, significantly outperforming GPT-4-powered baselines.
  • WebArena: A Realistic Web Environment for Building Autonomous Agents

    • Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
    • 🏛️ Institutions: CMU
    • 📅 Date: July 26, 2023
    • 📑 Publisher: NeurIPS 2023
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [benchmark], [multi-tab navigation], [web-based interaction], [agent simulation]
    • 📖 TLDR: WebArena provides a standalone, realistic web simulation environment where autonomous agents can perform complex web-based tasks. The platform offers functionalities such as multi-tab browsing, element interaction, and customized user profiles. Its benchmark suite contains 812 tasks grounded in high-level natural language commands. WebArena uses multi-modal observations, including HTML and accessibility tree views, supporting advanced tasks that require contextual understanding across diverse web pages, making it suitable for evaluating generalist agents in real-world web environments.
  • Android in the Wild: A Large-Scale Dataset for Android Device Control

    • Christopher Rawles, Alice Li, Daniel Rodriguez, Oriana Riva, Timothy Lillicrap
    • 🏛️ Institutions: Google Research, Google DeepMind
    • 📅 Date: July 19, 2023
    • 📑 Publisher: NeurIPS 2023
    • 💻 Env: [Mobile]
    • 🔑 Key: [dataset], [benchmark], [device control], [natural language interaction], [gesture-based actions]
    • 📖 TLDR: The Android in the Wild (AitW) dataset introduces a significant benchmark for Android device control, encompassing over 715,000 human-labeled episodes with natural language commands and corresponding UI actions. Collected from Android devices across versions 10-13, it captures complex multi-step tasks requiring both visual and contextual understanding. The dataset is structured to test the robustness of device-control systems under varying conditions, such as new tasks or applications, and includes data to evaluate gesture-based interactions, providing a unique foundation for mobile interface automation and task execution research.
  • Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer Control

    • Longtao Zheng, Rundong Wang, Xinrun Wang, Bo An
    • 🏛️ Institutions: NTU
    • 📅 Date: June 13, 2023
    • 📑 Publisher: ICLR 2024
    • 💻 Env: [Desktop]
    • 🔑 Key: [framework], [benchmark], [trajectory prompting], [state abstraction], [memory retrieval]
    • 📖 TLDR: Synapse introduces a novel framework for computer control tasks, leveraging trajectory-as-exemplar prompting and memory to enhance LLM performance in complex, multi-step computer tasks. The system combines state abstraction, trajectory-based prompts, and memory retrieval, overcoming LLM limitations by filtering task-irrelevant data, storing exemplar trajectories, and retrieving relevant instances for improved decision-making. Synapse achieves significant performance gains on benchmarks such as MiniWoB++ and Mind2Web, demonstrating enhanced task success rates and generalization across diverse web-based tasks.
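
A bare-bones sketch of the memory-retrieval half of this idea: stored trajectories are keyed by task description and the closest one is prepended as an exemplar. Bag-of-words cosine similarity stands in for the learned retrieval a real system would use, and the memory contents are invented:

```python
from collections import Counter
import math

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_exemplar(task, memory):
    """Return the stored trajectory whose task description is most similar to the query."""
    query = Counter(task.lower().split())
    best_task = max(memory, key=lambda t: cosine(query, Counter(t.lower().split())))
    return memory[best_task]

memory = {
    "book a one-way flight": "1. open search form 2. set origin 3. set date 4. click search ...",
    "forward an email with attachment": "1. open email 2. click forward 3. add recipient ...",
}
exemplar = retrieve_exemplar("book a round-trip flight", memory)
prompt = f"Exemplar trajectory:\n{exemplar}\n\nNow solve the new task step by step."
print(prompt)
```
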
  • Mind2Web: Towards a Generalist Agent for the Web

    • Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Sam Stevens, Boshi Wang, Huan Sun, Yu Su
    • 🏛️ Institutions: OSU
    • 📅 Date: June 9, 2023
    • 📑 Publisher: NeurIPS 2023
    • 💻 Env: [Web]
    • 🔑 Key: [dataset], [benchmark], [model], [Mind2Web], [MindAct]
    • 📖 TLDR: Mind2Web presents a dataset and benchmark specifically crafted for generalist web agents capable of performing language-guided tasks across varied websites. Featuring over 2,000 tasks from 137 sites, it spans 31 domains and emphasizes open-ended, realistic tasks in authentic, unsimplified web settings. The study proposes the MindAct framework, which optimizes LLMs for handling complex HTML elements by using small LMs to rank elements before full processing, thereby enhancing the efficiency and versatility of web agents in diverse contexts.
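
A simplified sketch of the two-stage idea: a cheap scorer shortlists candidate elements, then the large model answers a multiple-choice question over the shortlist. Word overlap stands in for the small ranking LM, and the prompt format is an approximation rather than MindAct's exact template:

```python
def rank_candidates(task, elements, top_k=5):
    """Stage 1: keep only the top-k DOM elements most relevant to the task."""
    task_words = set(task.lower().split())
    scored = sorted(elements, key=lambda el: -len(task_words & set(el.lower().split())))
    return scored[:top_k]

def build_multichoice_prompt(task, shortlist):
    """Stage 2: the large model picks one element (or 'none') from the shortlist."""
    options = "\n".join(f"{chr(65 + i)}. {el}" for i, el in enumerate(shortlist))
    return (f"Task: {task}\nWhich element should be acted on next?\n"
            f"{options}\nYou may also answer: none of the above.")

elements = ["<a> Sign in </a>", "<button> Add to cart </button>", "<input> Search products </input>"]
task = "add the blue backpack to cart"
print(build_multichoice_prompt(task, rank_candidates(task, elements, top_k=2)))
```
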
  • Mobile-Env: Building Qualified Evaluation Benchmarks for LLM-GUI Interaction

    • Danyang Zhang, Zhennan Shen, Rui Xie, Situo Zhang, Tianbao Xie, Zihan Zhao, Siyuan Chen, Lu Chen, Hongshen Xu, Ruisheng Cao, Kai Yu
    • 🏛️ Institutions: SJTU, HKU
    • 📅 Date: May 14, 2023
    • 📑 Publisher: arXiv
    • 💻 Env: [Mobile]
    • 🔑 Key: [benchmark], [dataset], [interaction platform], [multistep interaction], [InfoUI]
    • 📖 TLDR: This paper introduces Mobile-Env, an extensible interaction platform and benchmark for assessing large language models (LLMs) in multistep mobile GUI (InfoUI) interaction. A WikiHow-derived task set built on the platform provides structured, real-world-style challenges, and the platform is designed to support task expansions from the community, aiming to drive advancements in LLM-based interactive agents.
  • Language Models can Solve Computer Tasks

    • Geunwoo Kim, Pierre Baldi, Stephen McAleer
    • 🏛️ Institutions: UCI
    • 📅 Date: March 30, 2023
    • 📑 Publisher: NeurIPS 2023
    • 💻 Env: [Desktop]
    • 🔑 Key: [framework], [benchmark], [Recursive Critique and Improve], [RCI], [MiniWoB++], [general computer tasks]
    • 📖 TLDR: This study demonstrates that large language models (LLMs) can effectively automate computer tasks using a Recursive Critique and Improve (RCI) prompting method, enabling agents to handle complex desktop tasks like email and file management. By combining RCI with existing Chain of Thought (CoT) prompting, the method outperforms prior LLM approaches and traditional supervised and reinforcement learning models on the MiniWoB++ benchmark, showing potential for broad computer task automation.
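
A compact sketch of a recursive critique-and-improve loop; `llm` is any prompt-to-text callable, and the prompt wording is an illustrative assumption rather than the paper's exact prompts:

```python
def rci(llm, task, rounds=2):
    """Draft an action plan, ask the model to critique it, then ask for an
    improved plan, repeating for a few rounds."""
    plan = llm(f"Task: {task}\nWrite a step-by-step plan of UI actions.")
    for _ in range(rounds):
        critique = llm(f"Task: {task}\nPlan:\n{plan}\nFind any problems with this plan.")
        plan = llm(f"Task: {task}\nPlan:\n{plan}\nCritique:\n{critique}\nWrite an improved plan.")
    return plan

# Usage with a placeholder model call:
# final_plan = rci(lambda prompt: call_your_llm(prompt), "Archive all unread emails")
```
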
  • WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents

    • Shunyu Yao, Howard Chen, John Yang, Karthik Narasimhan
    • 🏛️ Institutions: Princeton University
    • 📅 Date: July 2022
    • 📑 Publisher: NeurIPS 2022
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [dataset], [benchmark], [e-commerce web interaction], [language grounding]
    • 📖 TLDR: This paper introduces WebShop, a simulated web-based shopping environment with over 1 million real-world products and 12,087 annotated instructions. It allows language agents to navigate, search, and make purchases based on natural language commands. The study explores how agents handle compositional instructions and noisy web data, providing a robust environment for reinforcement learning and imitation learning. The best models show effective sim-to-real transfer on websites like Amazon, illustrating WebShop’s potential for training grounded agents.
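
At its core the benchmark is an instruction-conditioned loop over search- and click-style actions; the tiny environment stub and item identifiers below are invented for illustration and do not reflect WebShop's actual API:

```python
def fake_webshop(action):
    """Stand-in environment: returns an observation string for each action."""
    if action.startswith("search["):
        return "Results: [B07XYZ] Red Running Shoes $39.99 | [B08ABC] Blue Sandals $19.99"
    if action.startswith("click[B"):
        return "Product page: Red Running Shoes, color: red, sizes: 8, 9, 10"
    return "Thank you for your purchase."

instruction = "I need red running shoes in size 9, under 50 dollars"
actions = ["search[red running shoes size 9]", "click[B07XYZ]", "click[buy now]"]
obs = "WebShop home page"
for act in actions:
    obs = fake_webshop(act)
    print(f"{act:35s} -> {obs}")
```
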
  • META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI

    • Liangtai Sun, Xingyu Chen, Lu Chen, Tianle Dai, Zichen Zhu, Kai Yu
    • 🏛️ Institutions: SJTU
    • 📅 Date: May 23, 2022
    • 📑 Publisher: EMNLP 2022
    • 💻 Env: [Mobile]
    • 🔑 Key: [benchmark], [dataset], [task-oriented dialogue], [GUI-based interaction], [multi-modal agent]
    • 📖 TLDR: This paper presents META-GUI, a dataset and framework for training multi-modal conversational agents capable of interacting directly with mobile app interfaces without the need for backend APIs. META-GUI includes over 1,100 dialogues with annotated action sequences on various tasks such as booking and scheduling. The authors propose a GUI-based task-oriented dialogue system in which agents operate mobile interfaces through direct GUI actions, and report promising results for multi-modal task-oriented dialogue on this benchmark.
  • Grounding Open-Domain Instructions to Automate Web Support Tasks

    • Nancy Xu, Sam Masling, Michael Du, Giovanni Campagna, Larry Heck, James Landay, Monica Lam
    • 🏛️ Institutions: Stanford
    • 📅 Date: March 30, 2021
    • 📑 Publisher: NAACL 2021
    • 💻 Env: [Web]
    • 🔑 Key: [benchmark], [framework], [grounding], [task automation], [open-domain instructions], [RUSS]
    • 📖 TLDR: This paper introduces RUSS (Rapid Universal Support Service), a framework designed to interpret and execute open-domain, step-by-step web instructions automatically. RUSS uses a BERT-LSTM model for semantic parsing into a custom language, ThingTalk, which allows the system to map language to actions across various web elements. The framework, including a dataset of instructions, facilitates agent-based web support task automation by grounding natural language to interactive commands.
  • Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements

    • Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, Zhiwei Guan
    • 🏛️ Institutions: Google Research
    • 📅 Date: November 2020
    • 📑 Publisher: EMNLP 2020
    • 💻 Env: [Mobile]
    • 🔑 Key: [dataset], [benchmark], [model], [accessibility], [natural language generation], [WidgetCaption]
    • 📖 TLDR: This paper introduces the task of widget captioning, which aims to automatically generate natural language descriptions for UI elements in mobile apps to enhance accessibility. Using both visual and structural data from UI components, the study presents a novel dataset of 162,859 captions across 61,285 UI elements. Multiple deep learning models were tested on this dataset, with findings suggesting the potential for improving screen reader usability for visually impaired users by generating descriptive captions of UI elements.
  • Reinforcement Learning on Web Interfaces Using Workflow-Guided Exploration

    • Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy Liang
    • 🏛️ Institutions: Stanford
    • 📅 Date: February 24, 2018
    • 📑 Publisher: ICLR 2018
    • 💻 Env: [Web]
    • 🔑 Key: [framework], [benchmark], [reinforcement learning], [web tasks], [workflow-guided exploration]
    • 📖 TLDR: This paper presents a novel RL approach using workflow-guided exploration to efficiently train agents on web-based tasks, where actions are restricted based on demonstrated workflows to streamline learning. Evaluated on MiniWoB and MiniWoB++ benchmarks, the method significantly outperforms traditional RL techniques in sparse reward settings by structuring exploration according to high-level action constraints.
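
A toy sketch of the workflow-guided idea: a demonstration is abstracted into a sequence of action templates, and exploration only samples concrete actions that fit the current template. The action and template encodings are invented for illustration:

```python
import random

# One demonstrated trajectory, abstracted into a workflow of action templates.
demonstration = [("click", "search box"), ("type", "search box"), ("click", "submit")]
workflow = [step[0] for step in demonstration]  # ["click", "type", "click"]

def sample_episode(candidate_actions):
    """Sample an exploration episode constrained to follow the workflow templates."""
    episode = []
    for template in workflow:
        target = random.choice(candidate_actions[template])  # explore only within the template
        episode.append((template, target))
    return episode

candidates = {"click": ["search box", "submit", "logo"], "type": ["search box", "comment field"]}
print(sample_episode(candidates))
```
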