Ziyun Yang, Somayyeh Soltanian-Zadeh and Sina Farsiu
Recently accepted by Pattern Recognition.
Paper at: https://arxiv.org/abs/2103.00334
BiconNet also achieves significant improvements on medical image segmentation and multi-class segmentation; please visit BiconNet-Medical for details.
Please also check our latest DconnNet paper, presented at CVPR 2023.
Requirement: PyTorch 1.7.1
This repository includes three parts:
- Code for customizing BiconNet with other backbones (/general)
- Code for reproducing the paper results (/paper_result)
- Evaluation code (/evaluation)
If you want to construct BiconNet based on your own network, there are four simple steps (see the sketches after this list):
- Replace your network's one-channel fully connected output layers with 8-channel FC layers.
For training:
- Generate the ground-truth connectivity masks using the function 'sal2conn' in utils_bicon.py.
- Replace your own loss function with the Bicon loss; you can edit connect_loss.py.
For testing:
- Use the function 'bv_test' in utils_bicon.py on the 8-channel connectivity map output to obtain your final saliency prediction.
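Below is a minimal sketch of these four steps using a toy backbone. The 8-channel head and forward pass are runnable as-is; the exact signatures of sal2conn, the Bicon loss in connect_loss.py, and bv_test are assumptions here, so check utils_bicon.py and connect_loss.py for the actual interfaces.

```python
import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    """Stand-in for your own network with its original output layer removed."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class MyBiconNet(nn.Module):
    """Step 1: replace the one-channel output layer with an 8-channel one
    (one channel per neighboring direction of each pixel)."""
    def __init__(self, backbone, feat_channels=64):
        super().__init__()
        self.backbone = backbone
        self.conn_head = nn.Conv2d(feat_channels, 8, kernel_size=1)

    def forward(self, x):
        return self.conn_head(self.backbone(x))  # B x 8 x H x W connectivity map

model = MyBiconNet(ToyBackbone())
image = torch.randn(2, 3, 224, 224)
conn_pred = model(image)
print(conn_pred.shape)  # torch.Size([2, 8, 224, 224])

# Steps 2-4, using the helpers shipped with this repo (signatures assumed):
# from utils_bicon import sal2conn, bv_test
# conn_gt = sal2conn(sal_gt)             # step 2: GT connectivity masks
# loss = bicon_loss(conn_pred, conn_gt)  # step 3: Bicon loss from connect_loss.py
# saliency = bv_test(conn_pred)          # step 4: decode to the final saliency map
```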
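For intuition on step 2, here is a hedged sketch of how a ground-truth connectivity mask can be built from a binary saliency mask, assuming the common 8-neighbor convention (channel d is 1 where a pixel and its neighbor in direction d are both salient). The repo's sal2conn may differ in direction ordering and border handling, so treat this as an illustration rather than a drop-in replacement.

```python
import torch
import torch.nn.functional as F

def sal_to_connectivity(sal):
    """sal: B x 1 x H x W binary saliency mask -> B x 8 x H x W connectivity mask."""
    # (dy, dx) offsets of the 8 neighbors; the ordering here is an assumption.
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]
    # Zero-pad so border pixels see "background" outside the image.
    padded = F.pad(sal, (1, 1, 1, 1))
    H, W = sal.shape[-2:]
    channels = []
    for dy, dx in offsets:
        neighbor = padded[..., 1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        channels.append(sal * neighbor)  # 1 only where pixel AND neighbor are salient
    return torch.cat(channels, dim=1)

conn_gt = sal_to_connectivity((torch.rand(1, 1, 8, 8) > 0.5).float())
print(conn_gt.shape)  # torch.Size([1, 8, 8, 8])
```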
For training: cd /MODEL_NAME/bicon/train, then run python train.py
For testing: cd /MODEL_NAME/bicon/test, then run python test.py
The pretrained models and saliency maps can be downloaded from Google Drive.
We use MATLAB to evaluate the output saliency maps, as done in https://github.com/JosephChenHub/GCPANet.
If you find this work useful in your research, please consider citing:
"Z. Yang, S. Soltanian-Zadeh, and S. Farsiu, "BiconNet: An Edge-preserved Connectivity-based Approach for Salient Object Detection", Pattern Recognition 121, 108231 (2022)"