- Our new paper, Scale-Aware Domain Adaptive Faster R-CNN, has been accepted by IJCV. The corresponding code is maintained in the sa-da-faster repository.
This is a Caffe2 implementation of 'Domain Adaptive Faster R-CNN for Object Detection in the Wild' by Haoran Wang (whrzxzero@gmail.com). The original paper can be found here. The implementation is built on Detectron @ 5ed75f9.
If you find this repository useful, please cite the original paper:
@inproceedings{chen2018domain,
title={Domain Adaptive Faster R-CNN for Object Detection in the Wild},
author = {Chen, Yuhua and Li, Wen and Sakaridis, Christos and Dai, Dengxin and Van Gool, Luc},
booktitle = {Computer Vision and Pattern Recognition (CVPR)},
year = {2018}
}
and Detectron:
@misc{Detectron2018,
author = {Ross Girshick and Ilija Radosavovic and Georgia Gkioxari and
Piotr Doll\'{a}r and Kaiming He},
title = {Detectron},
howpublished = {\url{https://github.com/facebookresearch/detectron}},
year = {2018}
}
Please follow the installation instructions in Detectron to install and use Detectron-DomainAdaptive-Faster-RCNN.
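For reference, a minimal sketch of that setup is shown below, condensed from Detectron's INSTALL.md (which remains the authoritative guide). It assumes Caffe2 is already installed with GPU support, and that `$DETECTRON` points at your clone of this repository and `$COCOAPI` at a clone of the COCO API.

    # Minimal setup sketch, condensed from Detectron's INSTALL.md (authoritative).
    # Assumes a working Caffe2 build with GPU support.
    pip install numpy pyyaml matplotlib opencv-python setuptools Cython mock scipy

    # COCO API (pycocotools), needed for the COCO-format annotations used below
    git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
    cd $COCOAPI/PythonAPI && make install

    # Build Detectron's Python modules; at the Detectron commit this repository
    # is based on, the Makefile lives under lib/
    cd $DETECTRON/lib && make
    make ops   # optional: build the custom Caffe2 operators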
An example of adapting from the Sim10k dataset to the Cityscapes dataset is provided:
- Download the Cityscapes dataset from here and the Sim10k dataset from here.
- Convert the labels of the Cityscapes and Sim10k datasets to COCO format using the scripts 'tools/convert_cityscapes_to_caronly_coco.py' and 'tools/convert_sim10k_to_coco.py' (an example invocation is sketched after this list).
- Convert the ImageNet-pretrained VGG16 Caffe model to Detectron format with 'tools/pickle_caffe_blobs.py' (see the sketch after this list), or use my converted VGG16 model available here.
- Train the Domain Adaptive Faster R-CNN:

        cd $DETECTRON
        python2 tools/train_net.py --cfg configs/da_faster_rcnn_baselines/e2e_da_faster_rcnn_vgg16-sim10k.yaml
- Test the trained model:

        cd $DETECTRON
        python2 tools/test_net.py --cfg configs/da_faster_rcnn_baselines/e2e_da_faster_rcnn_vgg16-sim10k.yaml TEST.WEIGHTS /<path_to_trained_model>/model_final.pkl NUM_GPUS 1
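The label- and weight-conversion steps above might look like the following. The script paths come from this repository, but the command-line flags (`--dataset_dir`, `--out_dir`, `--prototxt`, `--caffemodel`, `--output`) are assumptions; check each script's `--help` for the real interface.

    # Hypothetical invocations for the conversion steps; flag names are assumptions.
    cd $DETECTRON

    # Cityscapes (cars only) and Sim10k labels -> COCO-style json annotations
    python2 tools/convert_cityscapes_to_caronly_coco.py \
        --dataset_dir /path/to/cityscapes \
        --out_dir /path/to/cityscapes/annotations
    python2 tools/convert_sim10k_to_coco.py \
        --dataset_dir /path/to/sim10k \
        --out_dir /path/to/sim10k/annotations

    # ImageNet-pretrained VGG16 Caffe weights -> Detectron pickle format
    python2 tools/pickle_caffe_blobs.py \
        --prototxt /path/to/VGG16_deploy.prototxt \
        --caffemodel /path/to/VGG16.caffemodel \
        --output /path/to/vgg16.pkl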
The best results for the different adaptation settings are reported below. Due to the unstable nature of adversarial training, the best models were obtained by model selection on a randomly picked mini validation set (a checkpoint-sweep sketch follows the table).
|                 | image | instance | consistency | car AP | Pretrained models |
|-----------------|-------|----------|-------------|--------|-------------------|
| Faster R-CNN    |       |          |             | 32.58  |                   |
| DA Faster R-CNN | ✓     |          |             | 38.60  | model             |
| DA Faster R-CNN |       | ✓        |             | 35.55  | model             |
| DA Faster R-CNN | ✓     | ✓        |             | 39.23  | model             |
| DA Faster R-CNN | ✓     | ✓        | ✓           | 40.01  | model             |
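As an illustration of that model selection, one possible checkpoint sweep is sketched below. It is hypothetical: it assumes the config's TEST.DATASETS entry has been pointed at a small held-out validation split and that training wrote intermediate checkpoints named `model_iter*.pkl` in the output directory.

    # Hypothetical checkpoint sweep for model selection. Assumes TEST.DATASETS
    # in the config points at a mini validation split.
    cd $DETECTRON
    for ckpt in /<path_to_output_dir>/model_iter*.pkl; do
        echo "Evaluating $ckpt"
        python2 tools/test_net.py \
            --cfg configs/da_faster_rcnn_baselines/e2e_da_faster_rcnn_vgg16-sim10k.yaml \
            TEST.WEIGHTS "$ckpt" NUM_GPUS 1 \
            2>&1 | tee "${ckpt%.pkl}_minival.log"
    done
    # Pick the checkpoint with the highest car AP from the logged results.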
Other implementations:
- da-faster-rcnn: the original implementation by the paper's authors, based on Caffe.
- Domain-Adaptive-Faster-RCNN-PyTorch: based on PyTorch and maskrcnn-benchmark.
- sa-da-faster: Scale-Aware Domain Adaptive Faster R-CNN, based on PyTorch and maskrcnn-benchmark.