- The HiBo-UA dataset can be downloaded via this link.
- We also provide the results of state-of-the-art RGB-D methods on the HiBo-UA dataset, which can be downloaded directly.
This code is mainly based on our previous project (DCF, CVPR21).
Stage 1: Run `python demo_train_pre.py`, which performs the Depth Calibration Strategy.
Stage 2: Run `python demo_train.py`, which performs the Fusion Strategy.
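For convenience, the two stages can be launched back to back. The wrapper below is only an illustrative sketch: the `run_two_stage.py` file name and the argument-free invocation are assumptions; only `demo_train_pre.py` and `demo_train.py` come from this repo.

```python
# run_two_stage.py -- illustrative wrapper, not part of the original repo.
# Assumes demo_train_pre.py and demo_train.py run from the repo root without
# extra arguments; adapt the commands to your local setup.
import subprocess
import sys

def run(stage_name, script):
    """Run one training stage and stop early if it fails."""
    print(f"=== {stage_name}: python {script} ===")
    result = subprocess.run([sys.executable, script])
    if result.returncode != 0:
        sys.exit(f"{stage_name} failed with exit code {result.returncode}")

if __name__ == "__main__":
    run("Stage 1 (Depth Calibration Strategy)", "demo_train_pre.py")
    run("Stage 2 (Fusion Strategy)", "demo_train.py")
```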
- All test datasets used in this paper can be found at this link (fetch code: b2p2).
- This evaluation tool is used to score the saliency maps listed above (a rough MAE sketch is also given after this list).
- The training sets used in this paper can be accessed here: (NJUD+NLPR), fetch code 76gu, and (NJUD+NLPR+DUT), fetch code 201p.
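For a quick sanity check before running the official evaluation tool, the sketch below computes the mean absolute error (MAE) between predicted and ground-truth saliency maps. The directory layout and file naming are assumptions; reported numbers should come from the evaluation tool linked above.

```python
# mae_eval.py -- illustrative MAE computation, not the official evaluation tool.
# Assumes predictions and ground truths are grayscale images with matching filenames.
import os
import numpy as np
from PIL import Image

def mae(pred_dir, gt_dir):
    """Mean absolute error between normalized saliency maps and ground truths."""
    errors = []
    for name in sorted(os.listdir(gt_dir)):
        gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert("L"),
                        dtype=np.float64) / 255.0
        pred = Image.open(os.path.join(pred_dir, name)).convert("L")
        pred = pred.resize((gt.shape[1], gt.shape[0]))  # match GT resolution
        pred = np.asarray(pred, dtype=np.float64) / 255.0
        errors.append(np.abs(pred - gt).mean())
    return float(np.mean(errors))

if __name__ == "__main__":
    print("MAE:", mae("results/NJUD", "datasets/NJUD/GT"))  # placeholder paths
```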
We thank all reviewers for their valuable suggestions. We are also grateful to the many researchers who have contributed open-source work to this field, in particular Deng-ping Fan, Runmin Cong, and Tao Zhou, among others.
Our feature extraction network is based on the CPD backbone.
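For orientation only, the following sketch shows multi-level feature extraction with a ResNet-50 trunk standing in for the CPD backbone; the layer split and channel sizes are illustrative and do not reproduce the exact configuration used in this repo.

```python
# backbone_sketch.py -- illustrative multi-level feature extractor.
# A ResNet-50 trunk is used here only as a stand-in for the CPD backbone;
# see the CPD repository for the exact architecture used in this work.
import torch
import torch.nn as nn
import torchvision

class MultiLevelExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool)
        self.layer1, self.layer2 = resnet.layer1, resnet.layer2
        self.layer3, self.layer4 = resnet.layer3, resnet.layer4

    def forward(self, x):
        x = self.stem(x)
        f1 = self.layer1(x)   # low-level features
        f2 = self.layer2(f1)  # mid-level features
        f3 = self.layer3(f2)  # high-level features
        f4 = self.layer4(f3)  # deepest features
        return f1, f2, f3, f4

if __name__ == "__main__":
    feats = MultiLevelExtractor()(torch.randn(1, 3, 352, 352))
    print([f.shape for f in feats])
```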
@article{Li_2022_DCBF,
author = {Li, Jingjing and Ji, Wei and Zhang, Miao and Piao, Yongri and Lu, Huchuan and Cheng, Li},
title = {Delving into Calibrated Depth for Accurate RGB-D Salient Object Detection},
journal = {International Journal of Computer Vision},
doi = {10.1007/s11263-022-01734-1},
year = {2022},
}
If you have any questions, please contact us at wji3@ualberta.ca.