rschange is an open-source change detection toolbox dedicated to reproducing and developing advanced methods for change detection in remote sensing images.
Supported Methods
- STNet (ICME 2023)
- DDLNet (ICME 2024 Oral)
- CDMask (under review)
- CD-Mamba (under review; code will be updated soon)
- CDXFormer (under review)
- Other popular methods, including BIT (TGRS 2021), SNUNet (GRSL 2021), ChangeFormer (IGARSS 2022), LGPNet (TGRS 2021), SARAS-Net (AAAI 2023), USSFCNet (TGRS 2023), AFCF3DNet (TGRS 2023)
Supported Datasets
- LEVIR-CD
- WHU-CD
- DSIFN-CD
- CLCD
- SYSU-CD

Supported Tools
- Training
- Testing
- Params and FLOPs counting
- Class activation maps
News
- 2024/07/14: Class activation maps and several other popular methods (BIT, SNUNet, ChangeFormer, LGPNet, SARAS-Net) are now supported.
- 2024/06/24: CDMask has been submitted to arXiv (arXiv:2406.15320), and the official implementation of CDMask is available!
Environment preparation
conda create -n rscd python=3.9
conda activate rscd
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt
Note: this is the same environment as rsseg. If you have already set up the rsseg environment, you can use it directly.
Dataset preprocessing
LEVIR-CD: The original images are 1024 × 1024. Following the original division method, we crop them into non-overlapping 256 × 256 patches (see the cropping sketch below).
WHU-CD: It contains a pair of bi-temporal aerial images of size 32507 × 15354. These images are cropped into 256 × 256 patches, and the patches are then randomly divided into training, validation, and test sets at a ratio of 8:1:1.
DSIFN-CD & CLCD & SYSU-CD: These follow the original image size and dataset division method.
Note: We also provide the pre-processed data, which can be downloaded at this link.
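For reference, here is a minimal sketch of the preprocessing described above: non-overlapping 256 × 256 cropping (LEVIR-CD) and a random 8:1:1 split (WHU-CD). The paths, file naming, and the Pillow dependency are illustrative assumptions, not part of this repo.

```python
# Sketch only: crop full-size images into non-overlapping 256x256 patches and
# split patch names 8:1:1. All paths and file names are hypothetical.
import random
from pathlib import Path
from PIL import Image

PATCH = 256

def crop_to_patches(src_path: Path, dst_dir: Path) -> None:
    """Save non-overlapping PATCH x PATCH crops of one image."""
    img = Image.open(src_path)
    w, h = img.size
    dst_dir.mkdir(parents=True, exist_ok=True)
    for top in range(0, h - PATCH + 1, PATCH):
        for left in range(0, w - PATCH + 1, PATCH):
            patch = img.crop((left, top, left + PATCH, top + PATCH))
            patch.save(dst_dir / f"{src_path.stem}_{top}_{left}.png")

# Example: crop the pre-change images of the LEVIR-CD train split.
for p in Path("LEVIR_CD_raw/train/A").glob("*.png"):
    crop_to_patches(p, Path("data/LEVIR_CD/train/A"))

# Example: random 8:1:1 split of WHU-CD patch names (ratio from the text above).
names = sorted(p.name for p in Path("WHU_CD_patches/image1").glob("*.png"))
random.shuffle(names)
n = len(names)
train = names[: int(0.8 * n)]
val = names[int(0.8 * n): int(0.9 * n)]
test = names[int(0.9 * n):]
```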
Prepare the following folder structure to organize this repo:
rschangedetection
├── rscd (code)
├── work_dirs (save the model weights and training logs)
│   └── CLCD_BS4_epoch200 (dataset)
│       └── stnet (model)
│           └── version_0 (version)
│               ├── ckpts
│               │   ├── test (the best ckpts in the test set)
│               │   └── val (the best ckpts in the validation set)
│               ├── log (tensorboard logs)
│               ├── train_metrics.txt (train & val results per epoch)
│               ├── test_metrics_max.txt (the best test results)
│               └── test_metrics_rest.txt (other test results)
└── data
    ├── LEVIR_CD
    │   ├── train
    │   │   ├── A
    │   │   │   └── images1.png
    │   │   ├── B
    │   │   │   └── images2.png
    │   │   └── label
    │   │       └── label.png
    │   ├── val (the same as train)
    │   └── test (the same as train)
    ├── DSIFN
    │   ├── train
    │   │   ├── t1
    │   │   │   └── images1.jpg
    │   │   ├── t2
    │   │   │   └── images2.jpg
    │   │   └── mask
    │   │       └── mask.png
    │   ├── val (the same as train)
    │   └── test
    │       ├── t1
    │       │   └── images1.jpg
    │       ├── t2
    │       │   └── images2.jpg
    │       └── mask
    │           └── mask.tif
    ├── WHU_CD
    │   ├── train
    │   │   ├── image1
    │   │   │   └── images1.png
    │   │   ├── image2
    │   │   │   └── images2.png
    │   │   └── label
    │   │       └── label.png
    │   ├── val (the same as train)
    │   └── test (the same as train)
    ├── CLCD (the same as WHU_CD)
    └── SYSU_CD
        ├── train
        │   ├── time1
        │   │   └── images1.png
        │   ├── time2
        │   │   └── images2.png
        │   └── label
        │       └── label.png
        ├── val (the same as train)
        └── test (the same as train)
Training
python train.py -c configs/STNet.py
Testing
python test.py \
    -c configs/STNet.py \
    --ckpt work_dirs/CLCD_BS4_epoch200/stnet/version_0/ckpts/test/epoch=45.ckpt \
    --output_dir work_dirs/CLCD_BS4_epoch200/stnet/version_0/ckpts/test
Count params and FLOPs
python tools/params_flops.py --size 256
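The repo's tools/params_flops.py is not reproduced here; purely as an illustration of what such a counter does, the sketch below uses thop. The thop dependency, build_model(), and the two-image forward signature are assumptions, not the repo's actual API.

```python
# Illustration only: count parameters and MACs (commonly reported as FLOPs) for a
# bi-temporal change detection model. build_model() and the (img1, img2) forward
# signature are hypothetical.
import torch
from thop import profile

model = build_model().eval()          # hypothetical model builder
x1 = torch.randn(1, 3, 256, 256)      # pre-change image, matching --size 256
x2 = torch.randn(1, 3, 256, 256)      # post-change image
macs, params = profile(model, inputs=(x1, x2))
print(f"MACs: {macs / 1e9:.2f} G, Params: {params / 1e6:.2f} M")
```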
Class activation maps
python tools/grad_cam_CNN.py -c configs/cdxformer.py --layer=model.net.decoderhead.LHBlock2.mlp_l
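tools/grad_cam_CNN.py is the repo's own implementation; as a rough illustration of what Grad-CAM on the named layer computes, a from-scratch sketch might look like the following. build_model(), the (B, 2, H, W) output shape, and the assumption that the hooked layer emits a (B, C, h, w) feature map are all hypothetical.

```python
# Illustration only: Grad-CAM for a chosen layer of a bi-temporal model.
import torch
import torch.nn.functional as F

model = build_model().eval()                           # hypothetical builder
layer = model.net.decoderhead.LHBlock2.mlp_l           # layer from the command above

feats, grads = {}, {}
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x1 = torch.randn(1, 3, 256, 256)                       # pre-change image
x2 = torch.randn(1, 3, 256, 256)                       # post-change image
logits = model(x1, x2)                                 # assumed (B, 2, H, W) change logits
logits[:, 1].sum().backward()                          # back-propagate the "change" class

w = grads["g"].mean(dim=(2, 3), keepdim=True)          # channel weights from gradients
cam = F.relu((w * feats["a"]).sum(dim=1)).detach()     # (B, h, w) activation map
cam = F.interpolate(cam[None], size=(256, 256),
                    mode="bilinear", align_corners=False)[0]
```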
If you find rschange useful, please consider giving it a 🌟 and citing our work below. We will update rschange regularly.
@inproceedings{stnet,
title={STNet: Spatial and Temporal feature fusion network for change detection in remote sensing images},
author={Ma, Xiaowen and Yang, Jiawei and Hong, Tingfeng and Ma, Mengting and Zhao, Ziyan and Feng, Tian and Zhang, Wei},
booktitle={2023 IEEE International Conference on Multimedia and Expo (ICME)},
pages={2195--2200},
year={2023},
organization={IEEE}
}
@inproceedings{ddlnet,
title={DDLNet: Boosting Remote Sensing Change Detection with Dual-Domain Learning},
author={Ma, Xiaowen and Yang, Jiawei and Che, Rui and Zhang, Huanting and Zhang, Wei},
booktitle={2024 IEEE International Conference on Multimedia and Expo (ICME)},
pages={1--6},
year={2024},
doi={10.1109/ICME57554.2024.10688140}
}
@article{cdmask,
title={Rethinking Remote Sensing Change Detection With A Mask View},
author={Ma, Xiaowen and Wu, Zhenkai and Lian, Rongrong and Zhang, Wei and Song, Siyang},
journal={arXiv preprint arXiv:2406.15320},
year={2024}
}
If you have questions about our papers or are interested in further academic exchange and cooperation, please do not hesitate to contact us at xwma@zju.edu.cn. We look forward to hearing from you!
Thanks to previous open-source repos.
Thanks to the main contributor, Zhenkai Wu.