
# A Benchmark and Baseline for Open-Set Incremental Object Detection

Official PyTorch implementation of A Benchmark and Baseline for Open-Set Incremental Object Detection.

In real-world applications, detectors are expected to evolve and improve their perceptual abilities through incremental learning of the unknown. Open world object detection (OWOD) simulates this process by recognizing unknown objects and incrementally learning these unknown classes as labeled data becomes available. However, the training images of OWOD datasets suffer from two issues: they contain instances of 1) unknown classes and 2) previously known classes. To solve them, this paper proposes Open-Set Incremental Object Detection (OSIOD), which is defined as two processes: learning from the known, and an infinite cyclical process consisting of detection of the known and the unknown, together with incremental learning of the unknown. We construct a benchmark dataset by filtering out images containing unknown and/or previously known instances; it can also be used to create a new incremental object detection scenario in which images with only new classes are used for incremental learning. Additionally, we propose a baseline method for OSIOD by introducing label smoothing and finetuning to address unknown class discovery and catastrophic forgetting, respectively. Comprehensive experiments provide insights into the baseline. We hope that our benchmark, baseline, and insights will promote research towards OSIOD.
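The label-smoothing idea behind the baseline can be illustrated with a minimal sketch (a hypothetical helper, not the repo's implementation): a one-hot classification target is softened so that a small amount of probability mass is reserved for every class, which discourages over-confident predictions and leaves low-confidence regions usable for unknown-class discovery.

```python
def smooth_labels(one_hot, eps=0.1):
    """Soften a one-hot target: each of the K classes receives eps/K
    probability mass, and the true class keeps the remaining 1 - eps."""
    k = len(one_hot)
    return [(1.0 - eps) * t + eps / k for t in one_hot]

# A 4-class one-hot target with eps=0.1 becomes [0.925, 0.025, 0.025, 0.025].
print(smooth_labels([1.0, 0.0, 0.0, 0.0], eps=0.1))
```

The `eps=0.1` default mirrors the `--label-smoothing 0.1` flag used in the training command below.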

## Experiment Environment

Our code base is built on top of the Detectron2 library.

Set up your environment by following the Detectron2 tutorials.
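As a rough sketch, a typical Detectron2 setup follows the library's official installation instructions (verify the PyTorch build against your CUDA version before running):

```shell
# Install PyTorch first (pick the build matching your CUDA version),
# then install Detectron2 from source as its documentation recommends.
pip install torch torchvision
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
```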

## Data Preparation

### Open-Set Object Detection

- Download training and testing data from releases and put them in `protocol/custom_protocols/` (generated by `datasets/open_set_protocol_creator.py`)

### Open-Set Incremental Object Detection

- Download the MS COCO dataset

- Copy `datasets/Main/` (generated by `create_t[1/2/3/4]_imageset.py`, `deduplicate.py`, and `balanced_ft.py` in `datasets/`) to `datasets/COCO2017/ImageSets/`

- Run `coco_annotation_to_voc_style.py`

- Copy COCO 2017 train and val images to `datasets/COCO2017/JPEGImages/`

- The final folder structure should be (for testing):

  ```
  OSIOD
  └── datasets
      └── COCO2017
           ├── ImageSets
           ├── Annotations
           └── JPEGImages
  ```
- Download training data from releases and put them in `protocol/custom_protocols/` (generated by `datasets/OSIOD_protocol_creator.py`)
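The annotation-conversion step above can be sketched as follows (a hypothetical helper, not the repo's `coco_annotation_to_voc_style.py`, which also has to write out VOC-style XML files): COCO annotations store boxes as `[x, y, width, height]`, while VOC-style annotations use corner coordinates `[xmin, ymin, xmax, ymax]`.

```python
def coco_box_to_voc(box):
    """Convert a COCO bbox [x, y, width, height] into VOC-style
    corner coordinates [xmin, ymin, xmax, ymax]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

print(coco_box_to_voc([10.0, 20.0, 30.0, 40.0]))  # [10.0, 20.0, 40.0, 60.0]
```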

## Training and Testing in Open-Set Object Detection

Train in the YOLOv5 repo (v6.0) using:

```shell
python -m torch.distributed.launch \
--nproc_per_node 2 train.py --device 0,1 \
--batch-size 256 --epochs 400 --weights '' \
--cfg 'models/yolov5s.yaml' --label-smoothing 0.1
```

Test with our code using:

```shell
python main.py \
--config-file training_configs/YOLOv5.yaml \
--eval-only MODEL.WEIGHTS path/to/best.pt
```

## Training and Testing in Open-Set Incremental Object Detection

### Training

```shell
bash OSIOD.sh
```

### Testing

```shell
bash OSIOD_eval.sh
```

## Citation

If you find this work helpful, please consider citing it.

```bibtex
@inproceedings{wan2024benchmark,
  title={A Benchmark and Baseline for Open-Set Incremental Object Detection},
  author={Wan, Qian and Wang, Shouwen},
  booktitle={2024 International Joint Conference on Neural Networks (IJCNN)},
  pages={1--8},
  year={2024},
  organization={IEEE}
}
```

## Acknowledgement

This repository is built upon the code bases of *The Overlooked Elephant of Object Detection: Open Set* and *Towards Open World Object Detection*. Thanks very much!