Point2Mask: Point-supervised Panoptic Segmentation via Optimal Transport

Wentong Li, Yuqian Yuan, Song Wang, Jianke Zhu, Jianshu Li, Jian Liu, and Lei Zhang

Paper (arXiv). ICCV 2023.

Environment Setup

conda create -n point2mask python=3.8 -y
conda activate point2mask
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install openmim
mim install mmdet==2.17.0
mim install mmcv-full==1.3.9
git clone https://github.com/LiWentomng/Point2Mask.git
cd Point2Mask
pip install -r requirements.txt
sh setup.sh
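
To verify the environment, a quick sanity check (illustrative only, not part of the official setup) is to import the pinned packages and confirm CUDA is visible:

# sanity check for the environment created above (illustrative only)
import torch
import mmcv
import mmdet

print("torch:", torch.__version__)            # expect 1.9.0+cu111
print("cuda available:", torch.cuda.is_available())
print("mmcv-full:", mmcv.__version__)         # expect 1.3.9
print("mmdet:", mmdet.__version__)            # expect 2.17.0

If any of these versions differ or CUDA is not available, the training and test commands below will likely fail.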

Model Zoo

1. Single-point Supervision (P1)

Pascal VOC

Backbone Supervision Models PQ PQ_th PQ_st
ResNet-50 P1 model 53.7 51.9 90.5
ResNet-101 P1 model 54.8 53.0 90.4
Swin-L P1 model 61.0 59.4 93.0

COCO

Backbone Supervision Models PQ PQ_th PQ_st
ResNet-50 P1 model 32.4 32.6 32.2
ResNet-101 P1 model 34.0 34.3 33.5
Swin-L P1 model 37.0 37.0 36.9

Pascal VOC with the COCO P1-pretrained model.

Backbone Supervision Models PQ PQ_th PQ_st
ResNet-50 P1 model 60.7 59.1 91.8
ResNet-101 P1 model 63.2 61.8 92.3
Swin-L P1 model 64.2 62.7 93.2

2. Ten-point Supervision (P10)

Pascal VOC

Backbone Supervision Models PQ PQ_th PQ_st
ResNet-50 P10 model 59.1 57.5 91.8
ResNet-101 P10 model 60.2 58.6 92.1

COCO

Backbone Supervision Models PQ PQ_th PQ_st
ResNet-50 P10 model 32.4 32.6 32.2
ResNet-101 P10 model 36.7 37.3 35.7

Get Started

We use the Pascal VOC and COCO datasets; please see Preparing datasets for Point2Mask for preparation instructions.

Demo

To test our model on an input image, run demo.py:

python demo.py --config-file ./configs/point2mask/voc/point2mask_voc_wsup_r50.py --weights /path/to/coco_r50.pth --input image.jpg --out-file prediction.jpg
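
Under the hood, inference follows MMDetection's standard high-level API. A minimal sketch of single-image inference, assuming the repo's custom modules are importable and registered (the bundled demo.py remains the supported entry point; file paths are placeholders, as in the command above):

# minimal inference sketch via MMDetection's standard API (not the bundled demo.py)
from mmdet.apis import init_detector, inference_detector

config = './configs/point2mask/voc/point2mask_voc_wsup_r50.py'
checkpoint = '/path/to/coco_r50.pth'   # placeholder path

model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'image.jpg')
print(type(result))   # the panoptic result format is model-specific

How the panoptic result is rendered to prediction.jpg is handled inside demo.py.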

Training

For VOC training:

CUDA_VISIBLE_DEVICES=0,1,2,3 tools/dist_train.sh configs/point2mask/voc/point2mask_voc_wsup_r50.py 4

For COCO training:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 tools/dist_train.sh configs/point2mask/coco/point2mask_coco_wsup_r50.py 8

Note: our models for Pascal VOC are trained with RTX 3090/V100 GPUs, and the COCO models with A100 GPUs. The Structured Edge (SE) model used for low-level edge detection is available here.
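
If you train with fewer GPUs than listed above, the per-GPU batch size and learning rate typically need adjusting. A minimal sketch, assuming the configs follow the standard MMDetection schema (data.samples_per_gpu, optimizer.lr; check the actual fields in this repo's configs before relying on it):

# illustrative config tweak; field names assume the standard MMDetection schema
from mmcv import Config

cfg = Config.fromfile('configs/point2mask/voc/point2mask_voc_wsup_r50.py')
print(cfg.data.samples_per_gpu, cfg.optimizer.lr)

# e.g. halve the per-GPU batch size and scale the learning rate linearly
cfg.data.samples_per_gpu = max(1, cfg.data.samples_per_gpu // 2)
cfg.optimizer.lr = cfg.optimizer.lr / 2
cfg.dump('configs/point2mask/voc/point2mask_voc_wsup_r50_smallbatch.py')   # hypothetical output name

The dumped config can then be passed to tools/dist_train.sh in place of the original.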

Test

For PQ evaluation:

CUDA_VISIBLE_DEVICES=0,1,2,3 tools/dist_test.sh configs/point2mask/voc/point2mask_voc_wsup_r50.py work_dirs/xxx.pth 4 --eval pq

For visual results:

CUDA_VISIBLE_DEVICES=0 tools/dist_test.sh configs/point2mask/voc/point2mask_voc_wsup_r50.py work_dirs/xxx.pth 1 --show-dir xxx
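
If you already have predictions saved in COCO panoptic format and want to re-score them outside the MMDetection test pipeline, panopticapi can compute PQ directly. A sketch with hypothetical paths (the ground-truth JSON/PNG folders come from the dataset preparation step; requires the panopticapi package):

# re-score saved COCO-panoptic-format predictions with panopticapi (hypothetical paths)
from panopticapi.evaluation import pq_compute

results = pq_compute(
    gt_json_file='data/voc/panoptic_val.json',        # hypothetical
    pred_json_file='work_dirs/preds/panoptic.json',   # hypothetical
    gt_folder='data/voc/panoptic_val',
    pred_folder='work_dirs/preds/panoptic',
)
print(results['All']['pq'], results['Things']['pq'], results['Stuff']['pq'])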

Visual Examples

Visual Results on COCO with ResNet-101.

Visualization of the learned high-level boundary map.

Acknowledgement

Code is largely based on PSPS, Panoptic SegFormer, and MMDetection.
Thanks for their great open-source projects!

Citation

@inproceedings{point2mask,
  title={Point2Mask: Point-supervised Panoptic Segmentation via Optimal Transport},
  author={Wentong Li and Yuqian Yuan and Song Wang and Jianke Zhu and Jianshu Li and Jian Liu and Lei Zhang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}