# OVDEval

A Comprehensive Evaluation Benchmark for Open-Vocabulary Detection

[Paper 📄] [Dataset 🗂️]


OVDEval is a new benchmark for open-vocabulary detection (OVD) models. It includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously crafted to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets, and we propose a new metric called Non-Maximum Suppression Average Precision (NMS-AP) to address this issue.

Check out our AAAI 2024 paper, [How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection], for more details about the inflated AP problem and NMS-AP.
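To make the inflated AP problem concrete: an OVD model can stack several overlapping boxes with different fine-grained labels on the same object, and per-label AP never penalizes the wrongly labeled duplicates. NMS-AP first runs class-agnostic NMS over all predictions, so overlapping boxes compete regardless of label, and then computes standard AP on the survivors. Below is a minimal NumPy sketch of that filtering step; the function names and the 0.5 IoU threshold are illustrative choices, not the repository's actual implementation.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def class_agnostic_nms(boxes, scores, iou_thr=0.5):
    """Keep only the highest-scoring box among overlapping predictions,
    ignoring labels entirely; returns the indices of the kept boxes."""
    order = np.argsort(scores)[::-1]  # process detections by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        overlaps = iou(boxes[i], boxes[rest])
        order = rest[overlaps <= iou_thr]  # drop boxes overlapping the winner
    return keep
```

Computing AP on the kept detections means a high-scoring box with a wrong fine-grained label now suppresses, or is suppressed by, the correctly labeled one, so duplicate predictions can no longer inflate the score for free.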

*(Figure: OVDEval knowledge examples)*


## Dataset Statistics

*(Figure: benchmark dataset statistics)*


## Benchmark

*(Figure: radar chart of benchmark results)*


## How To Download

See our Hugging Face page for downloading OVDEval.
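If you use the `huggingface_hub` client, a snapshot download along these lines should work. This is a sketch: the `repo_id` below is an assumption, so confirm the exact dataset id on the project's Hugging Face page before relying on it.

```python
from huggingface_hub import snapshot_download

# Repo id is an assumption; confirm it on the project's Hugging Face page.
local_dir = snapshot_download(
    repo_id="omlab/OVDEval",
    repo_type="dataset",
    local_dir="./OVDEval",
)
print(f"Dataset downloaded to {local_dir}")
```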


## Evaluate With NMS-AP

OVDEval should be evaluated using NMS-AP to avoid the inflated AP problem. Please follow the evaluation instructions.

The `output` folder provides the final output JSON files, obtained by applying NMS to the inference results of the GLIP model on the material test dataset.
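As a rough sketch of the final scoring step, assuming both the ground-truth annotations and the NMS-filtered detections are in standard COCO JSON format, the AP can be computed with `pycocotools`. The file paths below are placeholders, not files shipped verbatim with the repository.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: substitute the ground-truth annotations for a sub-task
# and the corresponding NMS-filtered result file from the output folder.
coco_gt = COCO("annotations/material.json")
coco_dt = coco_gt.loadRes("output/glip_material_nms.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the standard AP / AR table
```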


## Citations

Please consider citing our paper if you use the dataset:

```
@article{yao2023evaluate,
  title={How to Evaluate the Generalization of Detection? A Benchmark for Comprehensive Open-Vocabulary Detection},
  author={Yao, Yiyang and Liu, Peng and Zhao, Tiancheng and Zhang, Qianqian and Liao, Jiajia and Fang, Chunxin and Lee, Kyusong and Wang, Qing},
  journal={arXiv preprint arXiv:2308.13177},
  year={2023}
}
```