AGENTISSUE-BENCH

AGENTISSUE-BENCH is the first reproducible issue resolution benchmark focused on real-world agent system issues. It is designed to evaluate the efficacy of state-of-the-art software engineering (SE) agents in resolving these issues.

🗓️ Updates

  • 2025-05: Initial benchmark release

📚 Benchmark Dataset

Through a multi-step filtering process, including failure reproduction, patch reproduction, and non-flakiness verification, we collect 50 reproducible agent system issues, which form AGENTISSUE-BENCH.

Each issue is containerized as a Docker image and hosted on Docker Hub: 🔗 Docker Hub Repository

To retrieve the images for all issues, run:

$ python pull_images.py

To pull a specific image by tag, use:

$ python pull_images.py --tag <tag>
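For orientation, the core of such a pull script can be sketched in a few lines of Python. Everything below is illustrative: the actual pull_images.py may differ, and the Docker Hub namespace (example/agentissue-bench) and the tag names are placeholders, not the real values.

# Illustrative sketch only; the real pull_images.py may differ.
# REPO and ALL_TAGS are placeholders, not the actual namespace or tags.
import argparse
import subprocess

REPO = "example/agentissue-bench"
ALL_TAGS = ["issue-001", "issue-002"]  # the benchmark ships 50 issues

def pull(tag: str) -> None:
    # Equivalent to: docker pull example/agentissue-bench:<tag>
    subprocess.run(["docker", "pull", f"{REPO}:{tag}"], check=True)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--tag", help="pull a single image by tag")
    args = parser.parse_args()
    for tag in ([args.tag] if args.tag else ALL_TAGS):
        pull(tag)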

To remove all pulled Docker images and containers, run:

$ python remove_images.py

To remove a specific image and container by tag:

$ python remove_images.py --tag <tag>
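A matching cleanup sketch, again with a placeholder namespace, under the assumption that remove_images.py deletes dependent containers before their images:

# Illustrative sketch only; the real remove_images.py may differ.
import subprocess

REPO = "example/agentissue-bench"  # placeholder namespace

def remove(tag: str) -> None:
    image = f"{REPO}:{tag}"
    # List containers created from this image, then force-remove them.
    ids = subprocess.run(
        ["docker", "ps", "-a", "-q", "--filter", f"ancestor={image}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for cid in ids:
        subprocess.run(["docker", "rm", "-f", cid], check=True)
    # With no dependent containers left, the image can be removed.
    subprocess.run(["docker", "rmi", image], check=True)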

To test the issues in AGENTISSUE-BENCH:

$ python test_agentissue_bench.py
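The test script's internals are not reproduced here, but conceptually a per-issue check can run each container and read its exit status. The sketch below assumes each image's default entrypoint runs the bundled reproduction test and exits 0 on success; that convention and the namespace are assumptions, not guarantees.

# Illustrative sketch only; the real test_agentissue_bench.py may differ.
import subprocess

REPO = "example/agentissue-bench"  # placeholder namespace

def test_issue(tag: str) -> bool:
    # Assumes the image's default entrypoint runs the reproduction test
    # and exits with status 0 when it passes.
    result = subprocess.run(["docker", "run", "--rm", f"{REPO}:{tag}"])
    return result.returncode == 0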

📊 Results

Overall Resolution Rate

The following pie chart shows the distribution of issues in AGENTISSUE-BENCH.

The following bar chart compares the resolution rate on AGENTISSUE-BENCH vs. traditional software issues.

The following table presents the overall results of SE agents on AGENTISSUE-BENCH.

🧪 Patch Evaluation

To evaluate generated patches in AGENTISSUE-BENCH:

  1. Create a directory named Patches:

     $ mkdir Patches

  2. Place your patch files inside subdirectories named by tag:

     Patches/{tag_name}/your_patch_files.patch

  3. Run the evaluation script (a rough sketch of the per-patch loop appears after this list):

     $ python eval_patches.py

  4. Inspect the results in patch_eval.log.
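For a sense of what the evaluation does per patch, here is a rough sketch. It assumes each container carries a git checkout of the agent system and a test entrypoint (./run_tests.sh here); the namespace, in-container paths, and entrypoint name are all placeholders, and eval_patches.py itself may work quite differently.

# Illustrative sketch only; eval_patches.py may work differently.
# REPO, /tmp/fix.patch, and run_tests.sh are placeholders.
import pathlib
import subprocess

REPO = "example/agentissue-bench"

for patch in sorted(pathlib.Path("Patches").glob("*/*.patch")):
    tag = patch.parent.name
    # Mount the patch read-only, apply it to the in-container checkout,
    # then rerun the issue's tests; exit code 0 means the issue is resolved.
    cmd = ["docker", "run", "--rm",
           "-v", f"{patch.resolve()}:/tmp/fix.patch:ro",
           f"{REPO}:{tag}",
           "bash", "-c", "git apply /tmp/fix.patch && ./run_tests.sh"]
    ok = subprocess.run(cmd).returncode == 0
    print(f"{tag}: {'resolved' if ok else 'unresolved'}")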

📁 Generated Patches

The Generated Patches directory contains all patches generated by our evaluation of different SE agents and Large Language Models (LLMs). The patches are organized as follows:

Generated Patches/
├── swe-agent/         # Patches generated by SWE-agent
├── Agentless/         # Patches generated by Agentless
└── Auto-code-rover/   # Patches generated by Auto-code-rover

Each agent directory contains patches generated using two state-of-the-art LLMs:

  • claude-3-5-sonnet-20241022
  • gpt-4o
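As a small convenience, the layout above can be tallied with a short script; the assumption that patch files carry a .patch suffix is ours, not stated by the repository.

# Count shipped patch files per agent; assumes a .patch suffix.
import pathlib
from collections import Counter

root = pathlib.Path("Generated Patches")
counts = Counter(p.relative_to(root).parts[0] for p in root.rglob("*.patch"))
for agent, n in sorted(counts.items()):
    print(f"{agent}: {n} patch files")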
