Simultaneous scene flow estimation and change detection. This is the official implementation of our paper: Dual Task Learning by Leveraging Both Dense Correspondence and Mis-Correspondence for Robust Change Detection With Imperfect Matches (CVPR 2022).
- Imperfect match download links (March 28, 2022)
- A synthetic dataset generation script (March 31, 2022)
- Training & evaluation scripts
- A demo script
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
- CUDA 11.1
- Python 3.7
- PyTorch 1.9.1
We tested the code with CUDA 11.1 on Ubuntu 20.04. SimSaC may work in other environments.
- Install requirements
pip install -r requirements.txt
- Install cupy (modify the CUDA version below to suit your environment):
pip install cupy-cuda111 --no-cache-dir
Download the following pretrained model and place it under the root directory.
We use a combination of the COCO, DPED, CityScapes, and ADE-20K datasets: objects from COCO serve as foregrounds, while images from DPED, CityScapes, and ADE-20K serve as backgrounds. For the flow and background generation of the reference and query, we use the synthetic flow dataset generation code from GLU-Net, resulting in 40,000 pairs. For the change-mask foreground generation, we use Copy-Paste from Copy-paste-aug. We create 5 change detection pairs for each background pair, for a total of 200,000 pairs.
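The core of the Copy-Paste step can be illustrated with a minimal sketch: pixels selected by a binary mask are taken from a foreground image, the rest from the background. Note this is only an illustration of the compositing idea, not the actual augmentation code, which additionally applies random scaling, placement, and blending.

```python
import numpy as np

def copy_paste(background, foreground, mask):
    """Composite foreground pixels onto a background where mask == 1.

    Minimal sketch of the Copy-Paste idea used for change-mask
    generation; the real pipeline also randomizes scale and placement.
    """
    mask3 = mask[..., None].astype(background.dtype)  # broadcast over channels
    return background * (1 - mask3) + foreground * mask3

# Toy example: 4x4 RGB images, pasted object covers the top-left 2x2 block.
bg = np.zeros((4, 4, 3), dtype=np.uint8)       # black background
fg = np.full((4, 4, 3), 255, dtype=np.uint8)   # white foreground
m = np.zeros((4, 4), dtype=np.uint8)
m[:2, :2] = 1
out = copy_paste(bg, fg, m)
```

The mask used for compositing doubles as the ground-truth change mask for that pair.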
Download and put all the datasets (DPED, CityScapes, ADE-20K, COCO) in the same directory. The directory should be organized as follows:
/source_datasets/
    original_images/
    CityScape/
    CityScape_extra/
    ADEChallengeData2016/
    coco/
To generate the synthetic change detection dataset and save it to disk:
python save_change_training_dataset_to_disk.py --save_dir synthetic
It will create the image pairs, flow fields, and change masks in save_dir/images, save_dir/flow, and save_dir/mask, respectively.
The process can take a day or more because the copy-paste step is time-consuming. Add --plot True to plot the generated samples.
- Download the ChangeSim Dataset
- Download the VL-CMU-CD Dataset
- Download the PCD Dataset (you need to contact the authors of PCD for access to the augmented PCD)
Download and put all the datasets in the same directory. The directory should be organized as follows:
/datasets/
    ChangeSim/
        Query_Seq_Train/
        Query_Seq_Test/
    VL-CMU-CD/
    pcd_5cv/
- Download the imperfect matches and put all the txt files in the same directory, named imperfect_matches.
- Each line of the txt files represents a sample, in the format: reference image path, query image path, ground-truth path, and match validity (1 or 0).
Run the following command to train the model on both the synthetic dataset and ChangeSim:
python train.py \
--pretrained "" \
--n_threads 4 --split_ratio 0.90 --split2_ratio 0.5 \
--trainset_list synthetic changesim_normal \
--testset_list changesim_dust \
--lr 0.0002 --n_epoch 25 \
--test_interval 10 \
--plot_interval 10 \
--name_exp joint_synthetic_changesim
Here, the model is evaluated every 10 epochs, and the results are visualized every 10 batches during evaluation.
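The interval flags can be understood with a small sketch of the trigger logic. This assumes 1-indexed counting with a simple modulo check, which may differ slightly from the actual training loop:

```python
def should_run(step, interval):
    """Return True when a 1-indexed step hits the given interval."""
    return step % interval == 0

# With --n_epoch 25 and --test_interval 10, evaluation would fire at
# epochs 10 and 20; --plot_interval 10 applies the same rule to the
# batch index within each evaluation pass.
eval_epochs = [e for e in range(1, 26) if should_run(e, 10)]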
For more results, see results.
We heavily borrow code from public projects such as GLU-Net, DGC-Net, PWC-Net, NC-Net, and Flow-Net-Pytorch.
This work was supported in part by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean Government (MSIT) (No. 2020-0-00440, Development of Artificial Intelligence Technology that Continuously Improves Itself as the Situation Changes in the Real World) and in part by the IITP grant funded by MSIT (No. 2019-0-01842, Artificial Intelligence Graduate School Program (GIST)).
This project is licensed under the GPL-3.0 License. See the LICENSE.md file for details.
Please consider citing this project in your publications if you find it helpful. The BibTeX entry is:
@inproceedings{park2022simsac,
  title     = {Dual Task Learning by Leveraging Both Dense Correspondence and Mis-Correspondence for Robust Change Detection With Imperfect Matches},
  author    = {Jin-Man Park and Ue-Hwan Kim and Seon-Hoon Lee and Jong-Hwan Kim},
  year      = {2022},
  booktitle = {2022 {IEEE} Conference on Computer Vision and Pattern Recognition, {CVPR} 2022}
}