Tianheng Cheng1, Xinggang Wang1, Shaoyu Chen1, Qian Zhang2, Wenyu Liu1,†
1 School of EIC, HUST, 2 Horizon Robotics
(†: corresponding author)
- [2023-3-29] We release the code and models of BoxTeacher!
- [2023-3-2] BoxTeacher has been accepted by CVPR 2023 🎉! We are preparing the code and models and plan to open-source them in March!
- [2022-10-12] We release the initial version of BoxTeacher!
- BoxTeacher presents a novel perspective, i.e., leveraging high-quality masks, to address box-supervised instance segmentation.
- BoxTeacher explores a self-training framework with consistency training, pseudo labeling, and noise-aware losses. It is effective and largely bridges the gap between fully supervised and box-supervised methods.
- BoxTeacher is extensible and can be applied to any instance segmentation approach. We plan to apply it to other methods, e.g., Mask2Former, but we cannot guarantee a timeline.
Labeling objects with pixel-wise segmentation requires a huge amount of human labor compared to bounding boxes. Most existing methods for weakly supervised instance segmentation focus on designing heuristic losses with priors from bounding boxes. However, we find that box-supervised methods can produce some fine segmentation masks, and we wonder whether the detectors could learn from these fine masks while ignoring low-quality masks. To answer this question, we present BoxTeacher, an efficient and end-to-end training framework for high-performance weakly supervised instance segmentation, which leverages a sophisticated teacher to generate high-quality masks as pseudo labels. Considering that massive noisy masks can hurt training, we present a mask-aware confidence score to estimate the quality of pseudo masks, and propose a noise-aware pixel loss and a noise-reduced affinity loss to adaptively optimize the student with pseudo masks.
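For intuition, here is a minimal sketch of a mask-aware confidence score. The exact formulation in the paper may weight the terms differently; the code below only illustrates the core idea of scaling the classification confidence by the sharpness of the predicted soft mask, so that fuzzy pseudo masks are down-weighted.

```python
# Illustrative sketch of a mask-aware confidence score (NOT the paper's
# exact formulation): scale the detector's classification confidence by
# the average predicted probability of foreground pixels, so confident,
# sharp masks score high and fuzzy masks are down-weighted.
import torch

def mask_aware_score(cls_score: torch.Tensor,
                     mask_probs: torch.Tensor,
                     thresh: float = 0.5) -> torch.Tensor:
    """cls_score: scalar classification confidence of one instance.
    mask_probs: (H, W) soft mask probabilities of the same instance."""
    fg = mask_probs > thresh
    if not fg.any():
        # No foreground pixels: the pseudo mask is useless.
        return torch.zeros_like(cls_score)
    # Average foreground probability is close to 1 for confident masks.
    mask_quality = mask_probs[fg].mean()
    return cls_score * mask_quality
```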
| Model | Backbone | Schedule | AP (val) | AP (test-dev) | Weights | Log |
|---|---|---|---|---|---|---|
| BoxTeacher | R-50 | 1x | 32.6 | 32.9 | ckpts | log* |
| BoxTeacher | R-50 | 3x | 34.9 | 35.0 | ckpts | log* |
| BoxTeacher | R-101 | 3x | 36.2 | 36.5 | ckpts | log* |
| BoxTeacher | R-101-DCN | 3x | 37.2 | 37.6 | ckpts | - |
| BoxTeacher | Swin-B | 3x | 40.2 | 40.5 | ckpts | - |
- *: we provide the training log produced with the re-implemented code.
- We have optimized the color-based pairwise loss (from BoxInst); training BoxTeacher (R-50, 1x) now takes 20 hours on 8 RTX 3090 GPUs.
BoxTeacher is mainly developed based on detectron2 and AdelaiDet.
- Install dependencies for BoxTeacher.

```bash
# install detectron2
python setup.py build develop

# install AdelaiDet
cd AdelaiDet
python setup.py build develop
cd ..
```
- Prepare the datasets for BoxTeacher.

```
boxteacher/
  datasets/
    coco/
    voc/
    cityscapes/
```

You can refer to the detectron2 documentation for more details about (custom) datasets.
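For example, if your custom dataset follows the COCO format, you can register it with detectron2's built-in helper; the dataset name and paths below are placeholders for your own data.

```python
# Registering a custom COCO-format dataset with detectron2.
# "my_dataset_train" and the paths are placeholders, not part of this repo.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "my_dataset_train",                            # name referenced in the config
    {},                                            # extra metadata (may be empty)
    "datasets/my_dataset/annotations/train.json",  # COCO-style annotation file
    "datasets/my_dataset/images/train",            # image root directory
)
```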
- Prepare the pre-trained weights for different backbones.

```bash
mkdir pretrained_models
cd pretrained_models
# download the weights with the links from the above table.
```
- Train BoxTeacher.

```bash
python train_net.py --config-file <path/to/config> --num-gpus 8
```

- Evaluate BoxTeacher with trained weights.

```bash
python train_net.py --config-file <path/to/config> --num-gpus 8 --eval-only MODEL.WEIGHTS <path/to/weights>
```
To apply BoxTeacher to other instance segmentation methods (a minimal sketch follows this list):

- create the wrapper class (BoxTeacher).
- modify the instance segmentation method (e.g., CondInst, Mask2Former) by:
  - adding `forward_teacher()`, the inference function used to obtain the pseudo masks;
  - adding the box-supervised loss and the pseudo mask loss for training.
- train and evaluate.
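Below is a minimal sketch of such a wrapper, NOT the released implementation: the EMA momentum, the hypothetical `pseudo_instances` keyword of the student, and the loss wiring are illustrative assumptions.

```python
# Sketch of a BoxTeacher-style wrapper around an instance segmentation model.
# The teacher is an EMA copy of the student and supplies pseudo masks; the
# student is trained with box-supervised + pseudo mask losses.
import copy
import torch
import torch.nn as nn

class BoxTeacherWrapper(nn.Module):
    def __init__(self, student: nn.Module, ema_momentum: float = 0.999):
        super().__init__()
        self.student = student
        # The teacher never receives gradients; it tracks the student via EMA.
        self.teacher = copy.deepcopy(student)
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.ema_momentum = ema_momentum

    @torch.no_grad()
    def update_teacher(self):
        # Exponential moving average of the student weights (call per iteration).
        m = self.ema_momentum
        for t, s in zip(self.teacher.parameters(), self.student.parameters()):
            t.mul_(m).add_(s.detach(), alpha=1.0 - m)

    @torch.no_grad()
    def forward_teacher(self, batched_inputs):
        # Inference pass of the teacher to obtain pseudo masks (step above).
        self.teacher.eval()
        return self.teacher(batched_inputs)

    def forward(self, batched_inputs):
        # 1) the teacher predicts pseudo masks for box-annotated images;
        # 2) the student is optimized with the box-supervised loss plus the
        #    pseudo mask loss (`pseudo_instances` is a hypothetical argument).
        pseudo_instances = self.forward_teacher(batched_inputs)
        return self.student(batched_inputs, pseudo_instances=pseudo_instances)
```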
BoxTeacher is based on detectron2 and AdelaiDet, and we sincerely thank them for their code and contributions to the community!
BoxTeacher is released under the MIT License.
If you find BoxTeacher useful in your research or applications, please consider giving us a star 🌟 and citing BoxTeacher with the following BibTeX entry.
@inproceedings{Cheng2022BoxTeacher,
  title     = {BoxTeacher: Exploring High-Quality Pseudo Labels for Weakly Supervised Instance Segmentation},
  author    = {Cheng, Tianheng and Wang, Xinggang and Chen, Shaoyu and Zhang, Qian and Liu, Wenyu},
  booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition ({CVPR})},
  year      = {2023}
}