The training and testing experiments are conducted in PyTorch on 8 Tesla V100 GPUs with 32 GB of memory each.
Note that FSPNet has only been tested on Ubuntu with the following environment.
- Creating a virtual environment in terminal:
conda create -n FSPNet python=3.8
- Installing necessary packages:
pip install -r requirements.txt
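Taken together, the two setup steps look like this in a terminal (the `conda activate` line is implied above but required before installing):

```shell
# Create the conda environment, activate it, then install the
# dependencies listed in the repository's requirements file.
conda create -n FSPNet python=3.8 -y
conda activate FSPNet
pip install -r requirements.txt
```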
- Download the training set (COD10K-train) used for training
- Download the testing sets (COD10K-test + CAMO-test + CHAMELEON + NC4K) used for testing
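After downloading, it is easy to misplace a dataset folder. The sketch below checks for an assumed layout — the directory names are illustrative, not mandated by the repo; adjust them to match the paths you configure in the code:

```python
import os

# Hypothetical dataset layout: these folder names are assumptions for
# illustration, not paths required by FSPNet itself.
EXPECTED = [
    "Dataset/TrainDataset",           # COD10K-train
    "Dataset/TestDataset/COD10K",
    "Dataset/TestDataset/CAMO",
    "Dataset/TestDataset/CHAMELEON",
    "Dataset/TestDataset/NC4K",
]

def missing_dirs(root="."):
    """Return the expected dataset folders not present under root."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    for d in missing_dirs():
        print("missing:", d)
```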
- The pretrained model is stored on Google Drive and Baidu Drive (extraction code: xuwb). After downloading, update the file path in the corresponding code.
- Run `train.sh` or `slurm_train.sh` as needed to train.
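A typical way to launch either script (the `sbatch` submission is an assumption about how `slurm_train.sh` is meant to be used; your cluster may invoke `srun` or the script directly):

```shell
# Single-machine training:
bash train.sh

# Or submit to a Slurm-managed cluster:
sbatch slurm_train.sh
```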
- Our well-trained model is stored on Google Drive and Baidu Drive (extraction code: otz5). After downloading, update the file path in the corresponding code.
- MATLAB code: one-key evaluation is written in MATLAB; follow the instructions in `main.m` and run it to generate the evaluation results.
- Python code: after configuring the test dataset path, run `slurm_eval.py` in the `run_slurm` folder for evaluation.
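For intuition about what the evaluation scripts compute, here is a minimal, dependency-free sketch of one standard COD metric, mean absolute error (MAE), between a predicted map and a ground-truth mask, both scaled to [0, 1] — an illustration, not the repo's evaluation code:

```python
# MAE: mean absolute error between a predicted map and the binary
# ground-truth mask, averaged over all pixels. Sketch only; the
# repository's MATLAB/Python scripts compute the full metric suite.

def mae(pred, gt):
    """Mean absolute error over two equally sized 2-D maps in [0, 1]."""
    total, count = 0.0, 0
    for p_row, g_row in zip(pred, gt):
        for p, g in zip(p_row, g_row):
            total += abs(p - g)
            count += 1
    return total / count

if __name__ == "__main__":
    pred = [[0.9, 0.1], [0.2, 0.8]]
    gt = [[1.0, 0.0], [0.0, 1.0]]
    print(mae(pred, gt))  # -> 0.15
```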
The prediction results of our FSPNet are stored on Google Drive and Baidu Drive (extraction code: ryzg); please check them.
@inproceedings{Huang2023Feature,
title={Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers},
author={Huang, Zhou and Dai, Hang and Xiang, Tian-Zhu and Wang, Shuo and Chen, Huai-Xin and Qin, Jie and Xiong, Huan},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023}
}
Thanks to Deng-Ping Fan, Ge-Peng Ji, et al. for their series of efforts in the field of COD.