AGENTISSUE-BENCH is the first reproducible issue resolution benchmark focused on real-world agent system issues. It is designed to evaluate the efficacy of state-of-the-art software engineering (SE) agents in resolving these issues.
- 2025-05: Initial benchmark release
Through a multi-step filtering process (failure reproduction, patch reproduction, and non-flakiness verification), we collect 50 reproducible agent issues, which form AGENTISSUE-BENCH.
Each issue is containerized as a Docker image and hosted on Docker Hub: 🔗 Docker Hub Repository
To retrieve the images for all issues, run:
$ python pull_images.py
To pull a specific image by tag, use:
$ python pull_images.py --tag <tag>
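For reference, a pull helper like this typically just wraps docker pull over the benchmark's image tags. The sketch below illustrates that idea; the Docker Hub repository name and the tag list are placeholders, not the benchmark's actual values.

```python
# Minimal sketch of a pull helper (placeholder repository and tags,
# not the benchmark's actual values).
import argparse
import subprocess

REPO = "example/agentissue-bench"        # placeholder Docker Hub repository
ALL_TAGS = ["issue-001", "issue-002"]    # placeholder issue tags

def pull(tag: str) -> None:
    # Equivalent to: docker pull example/agentissue-bench:<tag>
    subprocess.run(["docker", "pull", f"{REPO}:{tag}"], check=True)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Pull benchmark images")
    parser.add_argument("--tag", help="pull only this image tag")
    args = parser.parse_args()
    for tag in ([args.tag] if args.tag else ALL_TAGS):
        pull(tag)
```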
To remove all pulled Docker images and containers, run:
$ python remove_images.py
To remove a specific image and container by tag:
$ python remove_images.py --tag <tag>
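A cleanup script along these lines usually removes any containers created from an image before deleting the image itself. The sketch below is an assumption about how that could look, not the actual remove_images.py.

```python
# Minimal cleanup sketch (placeholder repository name; not the actual script).
import subprocess

REPO = "example/agentissue-bench"  # placeholder Docker Hub repository

def remove(tag: str) -> None:
    image = f"{REPO}:{tag}"
    # Find containers created from this image.
    result = subprocess.run(
        ["docker", "ps", "-a", "-q", "--filter", f"ancestor={image}"],
        capture_output=True, text=True, check=True,
    )
    for container_id in result.stdout.split():
        subprocess.run(["docker", "rm", "-f", container_id], check=True)
    # Remove the image itself.
    subprocess.run(["docker", "rmi", image], check=True)
```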
To test the issues in AGENTISSUE-BENCH:
$ python test_agentissue_bench.py
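Conceptually, testing an issue means starting its container and running the issue's reproduction command inside it. The snippet below illustrates this for a single tag; the image name and the in-container test command are assumptions for illustration only.

```python
# Illustrative sketch: run one issue's reproduction test in its container.
# The image name and the in-container command are assumptions, not the
# benchmark's actual values.
import subprocess

def test_issue(tag: str) -> bool:
    image = f"example/agentissue-bench:{tag}"   # placeholder image name
    completed = subprocess.run(
        ["docker", "run", "--rm", image, "bash", "-lc", "bash run_test.sh"],
    )
    # Interpretation of the exit code depends on how each image is set up
    # (e.g., whether the test is expected to fail before a fix is applied).
    return completed.returncode == 0
```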
The following figure shows the distribution of AgentIssue-Bench:

The following figure shows the resolution rate on AgentIssue-Bench vs. traditional software issues:

The following table presents the overall results of SE agents on AgentIssue-Bench:

To evaluate generated patches in AGENTISSUE-BENCH:
- Create a directory named Patches:
  mkdir Patches
- Place your patch files inside subdirectories named by tag:
  Patches/{tag_name}/your_patch_files.patch
- Run the evaluation script:
  python eval_patches.py
- You can see the results in patch_eval.log
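As a rough mental model, patch evaluation copies each patch into the matching issue container, applies it, and re-runs the issue's test. The sketch below captures that flow under assumed image names, in-container paths, and commands; it is not the actual eval_patches.py.

```python
# Rough sketch of a patch-evaluation loop (image names, in-container paths,
# and commands are placeholders, not the benchmark's actual values).
import pathlib
import subprocess

PATCHES_DIR = pathlib.Path("Patches")
REPO = "example/agentissue-bench"  # placeholder Docker Hub repository

for tag_dir in sorted(PATCHES_DIR.iterdir()):
    if not tag_dir.is_dir():
        continue
    for patch in sorted(tag_dir.glob("*.patch")):
        # Mount the patch into the issue's container, apply it, re-run the test.
        cmd = [
            "docker", "run", "--rm",
            "-v", f"{patch.resolve()}:/tmp/fix.patch",
            f"{REPO}:{tag_dir.name}",
            "bash", "-lc", "git apply /tmp/fix.patch && bash run_test.sh",
        ]
        result = subprocess.run(cmd)
        status = "resolved" if result.returncode == 0 else "unresolved"
        print(f"{tag_dir.name}/{patch.name}: {status}")
```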
The Generated Patches directory contains all patches generated by our evaluation of different SE agents and Large Language Models (LLMs). The patches are organized as follows:
Generated Patches/
├── swe-agent/ # Patches generated by SWE-agent
├── Agentless/ # Patches generated by Agentless
└── Auto-code-rover/ # Patches generated by Auto-code-rover
Each agent directory contains patches generated using two state-of-the-art LLMs:
- claude-3-5-sonnet-20241022
- gpt-4o
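For a quick overview of how many patches each agent/model combination produced, a small script like the following can walk the directory tree. The per-model subdirectory layout it assumes (one folder per model under each agent directory) is inferred from the description above and may not match the exact structure.

```python
# Count generated patches per agent and model (assumes one subdirectory per
# model under each agent directory, which may not match the exact layout).
import pathlib
from collections import Counter

root = pathlib.Path("Generated Patches")
counts = Counter()

for patch in root.rglob("*.patch"):
    # e.g. Generated Patches/swe-agent/gpt-4o/<tag>/fix.patch
    parts = patch.relative_to(root).parts
    agent = parts[0]
    model = parts[1] if len(parts) > 2 else "unknown"
    counts[(agent, model)] += 1

for (agent, model), n in sorted(counts.items()):
    print(f"{agent:20s} {model:30s} {n:4d} patches")
```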