Exploiting Saliency for Object Segmentation from Image Level Labels, CVPR'17
There have been remarkable improvements in the semantic labelling task in recent years. However, state-of-the-art methods rely on large-scale pixel-level annotations. This paper studies the problem of training a pixel-wise semantic labeller network from image-level annotations of the object classes present in an image. It has recently been shown that high-quality seeds indicating discriminative object regions can be obtained from image-level labels. Without additional information, however, obtaining the full extent of an object is an inherently ill-posed problem due to class co-occurrences. We propose using a saliency model as additional information, thereby exploiting prior knowledge about object extent and image statistics. We show how to combine both information sources in order to recover 80% of the fully supervised performance, which is the new state of the art in weakly supervised training for pixel-wise semantic labelling.
Clone this repository recursively.
$ git clone https://github.com/coallaoh/GuidedLabelling.git --recursive
$ cd GuidedLabelling/caffe
Follow the Caffe installation instructions to configure Makefile.config, and run
$ make -j50 && make pycaffe
Download precomputed saliency maps, network initialisations, train_aug.txt etc.
Please modify the path PASCALROOT in downloads.sh to indicate the root directory of your Pascal VOC database, and run
$ ./downloads.sh
Install Python requirements.
$ pip install numpy && pip install scipy && pip install -r ./pip-requirements
Install OpenCV for Python, following the instructions at http://opencv.org.
Install PyDenseCRF (https://github.com/lucasb-eyer/pydensecrf).
$ pip install git+https://github.com/lucasb-eyer/pydensecrf.git
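PyDenseCRF is used for the CRF postprocessing of the segmentation outputs. As a minimal usage sketch, assuming softmax class probabilities and an RGB image as inputs (the pairwise-kernel parameters below are illustrative defaults, not necessarily the values used by the scripts in this repository):

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def crf_postprocess(img, probs, n_iters=5):
        # img: HxWx3 uint8 RGB image; probs: CxHxW softmax scores over C classes.
        C, H, W = probs.shape
        d = dcrf.DenseCRF2D(W, H, C)
        d.setUnaryEnergy(unary_from_softmax(probs))
        # Pairwise terms: a smoothness kernel and an appearance (colour) kernel.
        d.addPairwiseGaussian(sxy=3, compat=3)
        d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=np.ascontiguousarray(img), compat=10)
        Q = d.inference(n_iters)
        return np.argmax(Q, axis=0).reshape(H, W)  # HxW map of the most likely class per pixel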
For every image, the pipeline computes (1) a seed heatmap, (2) a saliency map, and (3) a guide labelling, which serves as the training ground truth for the segmentation network (DeepLab in this case).
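As a rough illustration of step (3), the sketch below combines a seed heatmap with a saliency map into a per-pixel guide label: non-salient pixels become background, salient pixels with a confident seed take that seed's class, and the rest are marked as ignore. The thresholds and the combination rule here are simplifying assumptions for illustration only; the actual rule is implemented in the repository scripts.

    import numpy as np

    def guide_labels(seed_heatmap, saliency, image_classes,
                     seed_thresh=0.2, sal_thresh=0.5, ignore_label=255):
        # seed_heatmap: CxHxW seed scores for C foreground classes; saliency: HxW map in [0, 1].
        # image_classes: indices of the foreground classes present in the image (image-level labels).
        # Returns an HxW label map: 0 = background, 1..C = foreground classes, ignore_label = unlabelled.
        C, H, W = seed_heatmap.shape
        labels = np.full((H, W), ignore_label, dtype=np.uint8)

        # Non-salient pixels are treated as background.
        labels[saliency < sal_thresh] = 0

        # Salient pixels with a confident seed get that seed's class, restricted to the present classes.
        scores = np.where(np.isin(np.arange(C)[:, None, None], image_classes),
                          seed_heatmap, -np.inf)
        best = scores.argmax(axis=0)
        confident = (scores.max(axis=0) > seed_thresh) & (saliency >= sal_thresh)
        labels[confident] = best[confident] + 1  # +1 because label 0 is background
        return labels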
The following script performs:
- Seed network training.
- Computation of seed heatmaps for each image.
- Generation of guide labels by combining seeds and saliency maps.
- Training of the semantic segmentation network using the guide labels.
- Testing and evaluation of the segmentation network.
$ ./script.py
Before running, please change the variable PASCALROOT in the script to indicate the root directory of your Pascal VOC database, and set the variable GPU to the GPU device number of your choice. Please read the script for further details.
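For illustration only, those two variables could be set along the following lines (the path below is a placeholder, not a value from the repository):

    PASCALROOT = '/path/to/VOCdevkit/VOC2012'  # hypothetical path: root of your Pascal VOC installation
    GPU = 0                                    # id of the GPU device to use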
The final segmentation performance on the Pascal VOC val set is computed automatically: it should be 51.419 mIoU before and 56.153 mIoU after CRF postprocessing. These numbers are slightly better than those reported in our paper (51.2 and 55.7, respectively).
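For reference, mIoU is the intersection-over-union averaged over classes. The sketch below shows how the metric is typically computed from predicted and ground-truth label maps; it is only an illustration of the metric, not the evaluation code that the script runs.

    import numpy as np

    def mean_iou(preds, gts, n_classes=21, ignore_label=255):
        # preds, gts: iterables of HxW label maps; Pascal VOC has 21 classes (background + 20).
        conf = np.zeros((n_classes, n_classes), dtype=np.int64)
        for pred, gt in zip(preds, gts):
            mask = gt != ignore_label  # skip unlabelled pixels
            conf += np.bincount(n_classes * gt[mask].astype(np.int64) + pred[mask],
                                minlength=n_classes ** 2).reshape(n_classes, n_classes)
        inter = np.diag(conf)
        union = conf.sum(0) + conf.sum(1) - inter
        iou = inter / np.where(union > 0, union, np.nan)  # classes absent from both are excluded
        return float(np.nanmean(iou))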
The directory src/ contains additional scripts, e.g. for evaluating seed performance. Please read the source scripts and/or run, for example,
$ ./src/seed/train.py -h
to see options for playing with experimental parameters.
For any problems with the implementation or bugs, please contact Seong Joon Oh (coallaoh at gmail).
@inproceedings{joon17cvpr,
  title = {Exploiting Saliency for Object Segmentation from Image Level Labels},
  author = {Oh, Seong Joon and Benenson, Rodrigo and Khoreva, Anna and Akata, Zeynep and Fritz, Mario and Schiele, Bernt},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2017},
}