This repository contains the implementation of *Learning Pseudo Labels for Semi-and-weakly Supervised Semantic Segmentation*.
The code is also available on Gitee.
In this paper, we aim to tackle semi-and-weakly supervised semantic segmentation (SWSSS), where many image-level classification labels and a few pixel-level annotations are available. We believe the most crucial point for solving SWSSS is to produce high-quality pseudo labels, and our method deals with it from two perspectives. Firstly, we introduce a class-aware cross entropy (CCE) loss for network training. Compared to conventional cross entropy loss, CCE loss encourages the model to distinguish concurrent classes only and simplifies the learning target of pseudo label generation. Secondly, we propose a progressive cross training (PCT) method to build cross supervision between two networks with a dynamic evaluation mechanism, which progressively introduces high-quality predictions as additional supervision for network training. Our method significantly improves the quality of generated pseudo labels in the regime with extremely limited annotations. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods significantly.
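The CCE loss can be pictured as a cross entropy whose softmax normalizes only over the classes that actually occur in the image (plus background). Below is a minimal PyTorch sketch of this idea, not the repository's implementation; the function name and tensor shapes are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def class_aware_ce(logits, target, image_labels, ignore_index=255):
    """Cross entropy restricted to classes present in the image.

    logits: (N, C, H, W) raw class scores
    target: (N, H, W) per-pixel labels
    image_labels: (N, C) binary image-level labels (background included)
    """
    # Suppress classes absent from the image-level labels so the softmax
    # only normalizes over concurrent classes, which simplifies the
    # learning target for pseudo label generation.
    mask = image_labels.bool()[:, :, None, None]
    masked_logits = logits.masked_fill(~mask, float('-inf'))
    return F.cross_entropy(masked_logits, target, ignore_index=ignore_index)
```

Because absent classes receive `-inf` logits, they get zero probability, and the gradient only has to discriminate among classes that co-occur in the image.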
- Download the repository.

  ```shell
  git clone https://github.com/YudeWang/Learning-Pseudo-Label.git
  cd Learning-Pseudo-Label
  ```
- Create an Anaconda environment and install the Python dependencies.

  ```shell
  conda create -n semiweak python=3.8
  conda activate semiweak
  pip install -r requirements.txt
  ```
- Create a soft link to your dataset. Make sure the dataset can be accessed at `$your_dataset_path/VOCdevkit/VOC2012/...`

  ```shell
  ln -s $your_dataset_path data
  ```
All the experiments of this work are placed in `experiment/deeplabv3+_voc_swsss/`. Our approach is a two-stage method: it first trains a network to generate pseudo labels (stage 1), then retrains another network on them for the final prediction (stage 2).
```shell
cd experiment/deeplabv3+_voc_swsss/
```
We provide a script `run.py` in the experiment folder that covers training, testing, and inference for both stages.

```shell
export CUDA_VISIBLE_DEVICES=0
python run.py
```
If you want to run each step individually, first check the stage-1 configuration file `config.py` and the stage-2 configuration file `config_retrain.py` to match your custom setting, then run the corresponding Python script for training/testing/inference.
| Step | Command | Config file |
|---|---|---|
| Stage 1 - Train the model for pseudo label generation | `python train.py` | `config.py` |
| Stage 1 - Evaluate pseudo labels on the val set (w/ image-level labels) | `python test.py` | `config.py` |
| Stage 1 - Generate pseudo labels on the trainaug set | `python inference.py` | `config.py` |
| Stage 2 - Retrain another model | `python retrain.py` | `config_retrain.py` |
| Stage 2 - Evaluate the retrained model on the val set (w/o image-level labels) | `python retest.py` | `config_retrain.py` |
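In stage 1, pseudo labels come from filtering the network's class scores with the image-level labels before the per-pixel argmax. A short sketch of that filtering step, assuming PyTorch tensors (the function name and shapes are illustrative, not the repository's API):

```python
import torch

def generate_pseudo_label(probs, image_labels):
    """probs: (C, H, W) per-pixel class probabilities (e.g. softmax output)
    image_labels: (C,) binary image-level label vector

    Zero out classes absent from the image-level labels, then take the
    per-pixel argmax, so the pseudo label can only contain classes that
    actually appear in the image.
    """
    masked = probs * image_labels[:, None, None]
    return masked.argmax(dim=0)
```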
Tip: follow `run.py` when modifying the experiment settings in `config.py` and `config_retrain.py`.
Here are some trained model files:
| Size of strongly labeled subset | HybridNet (eval w/ image-level labels) | PseudoNet (eval w/ image-level labels) | Retrained model (eval w/o image-level labels) |
|---|---|---|---|
| 92 | 77.9% mIoU (Google Drive / Baidu Drive) | 78.3% mIoU (Google Drive / Baidu Drive) | 76.2% mIoU (Google Drive / Baidu Drive) |
| 183 | 79.7% mIoU (Google Drive / Baidu Drive) | 79.2% mIoU (Google Drive / Baidu Drive) | 77.6% mIoU (Google Drive / Baidu Drive) |
| 366 | 81.7% mIoU (Google Drive / Baidu Drive) | 82.4% mIoU (Google Drive / Baidu Drive) | 78.7% mIoU (Google Drive / Baidu Drive) |
| 732 | 83.7% mIoU (Google Drive / Baidu Drive) | 83.9% mIoU (Google Drive / Baidu Drive) | 79.9% mIoU (Google Drive / Baidu Drive) |
| 1464 | 86.2% mIoU (Google Drive / Baidu Drive) | 86.2% mIoU (Google Drive / Baidu Drive) | 81.2% mIoU (Google Drive / Baidu Drive) |
Please cite our paper if the code is helpful for your research.
```
@article{wang2022learning,
  title={Learning Pseudo Labels for Semi-and-weakly Supervised Semantic Segmentation},
  author={Wang, Yude and Zhang, Jie and Kan, Meina and Shan, Shiguang},
  journal={Pattern Recognition},
  pages={108925},
  year={2022},
  publisher={Elsevier}
}
```