This repository is the official PyTorch implementation of our NeurIPS(2020) paper.
You can switch the branch to "CN" to view the Chinese version of README.md and obtain code with Chinese comments.
Our training set is a subset of the COCO dataset, containing 9213 images.
- COCO9213-os.zip (images with original sizes, 4.53GB), GoogleDrive | BaiduYun (fetch code: 5183).
- COCO9213.zip (images resized to 224×224, 943MB), GoogleDrive | BaiduYun (fetch code: 8d7z).
Our test datasets include:
- MSRC (7 groups, 233 images) ''Object Categorization by Learned Universal Visual Dictionary, ICCV(2005)''
- iCoseg (38 groups, 643 images) ''iCoseg: Interactive Co-segmentation with Intelligent Scribble Guidance, CVPR(2010)''
- Cosal2015 (50 groups, 2015 images) ''Detection of Co-salient Objects by Looking Deep and Wide, IJCV(2016)''
You can download them from:
- test-datasets (resized to 224×224, 77MB), GoogleDrive | BaiduYun (fetch code: oq5w).
- test-datasets-os (original sizes, 142MB), GoogleDrive | BaiduYun (fetch code: ujdl).
We also test ICNet on two recently proposed datasets:
- CoSOD3k (160 groups, 3316 images) ''Taking a Deeper Look at Co-salient Object Detection, CVPR(2020)''
- CoCA (80 groups, 1295 images) ''Gradient-Induced Co-Saliency Detection, ECCV(2020)''
We provide the pre-trained ICNet, which is based on SISMs produced by the pre-trained EGNet (VGG16-based).
- ICNet_vgg16.pth (70MB), GoogleDrive | BaiduYun (fetch code: nkj9).
We release the co-saliency maps (predictions) generated by our ICNet on 5 benchmark datasets:
MSRC, iCoseg, Cosal2015, CoCA, and CoSOD3k.
- cosal-maps.zip (results of size 224×224, 20MB), GoogleDrive | BaiduYun (fetch code: du5e).
- cosal-maps-os.zip (results resized to original sizes, 62MB), GoogleDrive | BaiduYun (fetch code: xwcv).
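If you prefer to rescale the 224×224 predictions back to the original resolutions yourself, a minimal sketch with Pillow is below (the directory layout, file names, and the choice of bilinear interpolation are assumptions for illustration, not part of this repo):

```python
import os
from PIL import Image

def resize_maps(map_dir, out_dir, size_lookup):
    """Resize 224x224 co-saliency maps back to their original resolutions.

    `size_lookup` maps each file name to its original (width, height);
    in practice you would read these sizes from the source images.
    """
    os.makedirs(out_dir, exist_ok=True)
    for name, (w, h) in size_lookup.items():
        sal = Image.open(os.path.join(map_dir, name)).convert("L")  # grayscale map
        sal = sal.resize((w, h), Image.BILINEAR)  # smooth upsampling
        sal.save(os.path.join(out_dir, name))
```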
Our ICNet can be trained and tested with SISMs produced by any off-the-shelf SOD method, but we suggest using the same SOD method to generate SISMs in both the training and test phases for consistency.
In our paper, we choose the pre-trained EGNet (VGG16-based) as the basic SOD method to produce SISMs; you can download these SISMs directly from:
- EGNet-SISMs (resized to 224*224, 125MB), GoogleDrive | BaiduYun (fetch code: ae6a).
- Download pre-trained VGG16: vgg16_feat.pth (56MB), GoogleDrive | BaiduYun (fetch code: j0zq).
- Follow the instructions in "./ICNet/train.py" to modify the training settings.
- Run:
  python ./ICNet/train.py
- Test the pre-trained ICNet: download "ICNet_vgg16.pth" (the download link is given above).
- Test an ICNet trained by yourself: choose the checkpoint file "Weights_i.pth" (saved automatically after the i-th epoch) that you want to load for testing.
- Follow the instructions in "./ICNet/test.py" to modify the test settings.
- Run:
  python ./ICNet/test.py
The folder "./ICNet/evaluator/" contains evaluation code implemented in PyTorch (GPU version); the metrics include max F-measure, S-measure, and MAE.
- Follow the instructions in "./ICNet/evaluate.py" to modify the evaluation settings.
- Run:
  python ./ICNet/evaluate.py
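For reference, two of these metrics are easy to sketch in NumPy. The snippet below illustrates the standard definitions (with the β² = 0.3 weighting commonly used in the SOD literature); it is not the repo's PyTorch implementation in "./ICNet/evaluator/", which should be used for reported numbers:

```python
import numpy as np

def mae(pred, gt):
    # Mean Absolute Error between a predicted map and a ground-truth mask,
    # both assumed to be normalized to [0, 1].
    return float(np.abs(pred - gt).mean())

def max_f_measure(pred, gt, beta2=0.3, steps=256):
    # Binarize the prediction at a sweep of thresholds, compute the
    # F-measure at each, and return the maximum over all thresholds.
    mask = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, steps):
        binary = pred >= t
        tp = np.logical_and(binary, mask).sum()
        precision = tp / (binary.sum() + 1e-8)
        recall = tp / (mask.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, float(f))
    return best
```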
We compare our ICNet with 7 state-of-the-art Co-SOD methods:
- CBCS ''Cluster-Based Co-Saliency Detection, TIP(2013)''
- CSHS ''Co-Saliency Detection Based on Hierarchical Segmentation, SPL(2014)''
- CoDW ''Detection of Co-salient Objects by Looking Deep and Wide, IJCV(2016)''
- UCSG ''Unsupervised CNN-based Co-Saliency Detection with Graphical Optimization, ECCV(2018)''
- CSMG ''Co-saliency Detection via Mask-guided Fully Convolutional Networks with Multi-scale Label Smoothing, CVPR(2019)''
- MGLCN ''A Unified Multiple Graph Learning and Convolutional Network Model for Co-saliency Estimation, ACM MM(2019)''
- GICD ''Gradient-Induced Co-Saliency Detection, ECCV(2020)''
You can download predictions of these methods from:
- compared_methods (original sizes, 445MB), GoogleDrive | BaiduYun (fetch code: s7pr).
To be updated.
If you have any questions, feel free to contact me (Wen-Da Jin) at jwd331@126.com; I will reply as soon as possible.