MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection

by Onkar Krishna, Hiroki Ohashi and Saptarshi Sinha.

This repository contains the code for the paper 'MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection,' which has been accepted for oral presentation at BMVC 2023.

Requirements

The environment required to reproduce our results includes:

Python >= 3.8
CUDA == 10.1
PyTorch == 1.7.0+cu101
detectron2 == 0.5 

Please refer to the instructions for guidance on installing Detectron2.
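As a quick sanity check before training, the snippet below compares the installed packages against the versions listed above. The helper itself is illustrative and not part of MILA:

```python
"""Sanity-check the environment against the versions listed in this README."""
import sys

# Expected versions from the requirements list above ("+cu101" suffixes are
# local version labels and are ignored by the comparison).
EXPECTED = {"torch": "1.7.0", "detectron2": "0.5"}


def version_ok(found: str, expected: str) -> bool:
    """True if the installed version matches the expected prefix."""
    return found.split("+")[0].startswith(expected)


def main() -> None:
    assert sys.version_info >= (3, 8), "Python >= 3.8 is required"
    for pkg, want in EXPECTED.items():
        try:
            mod = __import__(pkg)
        except ImportError:
            print(f"{pkg}: not installed (need {want})")
            continue
        got = getattr(mod, "__version__", "unknown")
        status = "OK" if version_ok(got, want) else f"expected {want}"
        print(f"{pkg}: {got} ({status})")


if __name__ == "__main__":
    main()
```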

Datasets

Please download and arrange the following datasets:

Ensure that you organize these datasets in the same manner as demonstrated in the Adaptive Teacher repository.

Additionally, for the following datasets:

Please arrange them as follows:

MILA/
└── datasets/
    ├── sim10k/
    │   ├── Annotations/
    │   ├── ImageSets/
    │   └── JPEGImages/
    └── comic/
        ├── Annotations/
        ├── ImageSets/
        └── JPEGImages/
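
The layout above can be verified with a short helper. This is an illustrative sketch, not part of MILA; the required folder names come from the tree above:

```python
"""Check that a dataset folder matches the VOC-style layout shown above."""
from pathlib import Path

# Subdirectories required by the layout in this README.
REQUIRED_SUBDIRS = ("Annotations", "ImageSets", "JPEGImages")


def missing_subdirs(dataset_root) -> list:
    """Return the required subdirectories absent under dataset_root."""
    root = Path(dataset_root)
    return [d for d in REQUIRED_SUBDIRS if not (root / d).is_dir()]


if __name__ == "__main__":
    for name in ("sim10k", "comic"):
        missing = missing_subdirs(Path("datasets") / name)
        print(name, "OK" if not missing else f"missing: {missing}")
```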

How to run the code

  • Train MILA with Sim10k as the source domain and Cityscapes as the target domain:
python train_net_mem.py \
      --num-gpus 4 \
      --config configs/faster_rcnn_R101_cross_sim10k_13031.yaml \
      OUTPUT_DIR output/sim10k_ckpt
  • Train MILA with Pascal VOC as the source domain and Comic2k as the target domain:
python train_net_mem.py \
      --num-gpus 1 \
      --config configs/faster_rcnn_R101_cross_comic_08032.yaml \
      OUTPUT_DIR output/comic_ckpt
  • Train MILA with Cityscapes as the source domain and Foggy Cityscapes as the target domain. For this setting, first install two extra packages: pip install cityscapesScripts and pip install shapely.
python train_net_mem.py \
      --num-gpus 1 \
      --config configs/faster_rcnn_VGG_cross_city_07021_3.yaml \
      OUTPUT_DIR output/foggy_ckpt
  • For evaluation:
python train_net_mem.py \
      --eval-only \
      --num-gpus 4 \
      --config configs/faster_rcnn_R101_cross_sim10k_13031.yaml \
      MODEL.WEIGHTS <your weight>.pth
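
To evaluate several saved checkpoints in a row, one option is to build the evaluation command above once per checkpoint. This is a sketch under the assumption that checkpoints are saved as .pth files under OUTPUT_DIR; the command-line flags are the ones shown in this README:

```python
"""Build the evaluation command above for every .pth checkpoint found."""
from pathlib import Path


def eval_commands(output_dir, config, num_gpus=4):
    """Yield one argv list per checkpoint, ready for subprocess.run."""
    for ckpt in sorted(Path(output_dir).glob("*.pth")):
        yield [
            "python", "train_net_mem.py",
            "--eval-only",
            "--num-gpus", str(num_gpus),
            "--config", config,
            "MODEL.WEIGHTS", str(ckpt),
        ]


if __name__ == "__main__":
    import subprocess
    # Example: evaluate all checkpoints from the Sim10k -> Cityscapes run.
    for cmd in eval_commands("output/sim10k_ckpt",
                             "configs/faster_rcnn_R101_cross_sim10k_13031.yaml"):
        subprocess.run(cmd, check=True)
```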

Acknowledgement

This repository is based on the code from Adaptive Teacher. We thank the authors for their contributions.

Citation

If you use our code, please consider citing our paper:

@article{krishna2023mila,
  title={MILA: Memory-Based Instance-Level Adaptation for Cross-Domain Object Detection},
  author={Krishna, Onkar and Ohashi, Hiroki and Sinha, Saptarshi},
  journal={arXiv preprint arXiv:2309.01086},
  year={2023}
}

For queries, contact onkar.krishna.vb@hitachi.com.