SACNet

Code repository for our paper entitled "Alignment-Free RGBT Salient Object Detection: Semantics-guided Asymmetric Correlation Network and A Unified Benchmark", accepted at TMM 2024.

arXiv version: https://arxiv.org/abs/2406.00917.

Citing our work

If you find our work helpful, please cite:

@article{Wang2024alignment,
  title={Alignment-Free RGBT Salient Object Detection: Semantics-guided Asymmetric Correlation Network and A Unified Benchmark},
  author={Wang, Kunpeng and Lin, Danying and Li, Chenglong and Tu, Zhengzheng and Luo, Bin},
  journal={IEEE Transactions on Multimedia},
  year={2024}
}

The Proposed Unaligned RGBT Salient Object Detection Dataset

UVT2000

We construct a novel benchmark dataset containing 2000 unaligned visible-thermal image pairs, directly captured from various real-world scenes, to facilitate research on alignment-free RGBT SOD.


The proposed dataset can be downloaded here. [baidu pan fetch code: irwv] or [google drive]

Dataset Statistics and Comparisons

We analyze the proposed UVT2000 dataset from several statistical aspects and compare it with other existing multi-modal SOD datasets.


Overview

Framework


RGB-T SOD Performance


RGB-D SOD Performance


RGB SOD Performance


Predictions

RGB-T saliency maps can be found here. [baidu pan fetch code: xyu7] or [google drive]

RGB-D saliency maps can be found here. [baidu pan fetch code: jrjl] or [google drive]

RGB saliency maps can be found here. [baidu pan fetch code: kj6o] or [google drive]

Pretrained Models

The pretrained parameters of our models can be found here. [baidu pan fetch code: ihri] or [google drive]

Usage

Requirements

  1. Download the datasets for training and testing from here. [baidu pan fetch code: 075x]
  2. Download the pretrained parameters of the backbone from here. [baidu pan fetch code: mad3]
  3. Create directories for the experiment and parameter files.
  4. Use conda to install torch (1.12.0) and torchvision (0.13.0).
  5. Install other packages: pip install -r requirements.txt (a setup sketch is given below this list).
  6. Set the paths of all datasets in ./Code/utils/options.py.
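A minimal environment-setup sketch, assuming a fresh conda environment; the environment name and the CUDA 11.3 build are illustrative choices, not prescribed by this repository:

# create and activate an isolated environment (the name "sacnet" is arbitrary)
conda create -n sacnet python=3.8
conda activate sacnet
# install the pinned torch / torchvision versions (CUDA 11.3 build shown as an example)
conda install pytorch==1.12.0 torchvision==0.13.0 cudatoolkit=11.3 -c pytorch
# install the remaining dependencies
pip install -r requirements.txt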

Train

python -m torch.distributed.launch --nproc_per_node=2 --master_port=2212 train_parallel.py
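The command above launches two training processes (--nproc_per_node=2), one per GPU. On a single-GPU machine, the same launcher can be run with one process, assuming train_parallel.py does not require exactly two devices:

# hypothetical single-GPU variant of the training command
python -m torch.distributed.launch --nproc_per_node=1 --master_port=2212 train_parallel.py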

Test

python test_produce_maps.py

Acknowledgement

The implementation of this project is based on the following link.

Contact

If you have any questions, please contact us (kp.wang@foxmail.com).