Fully Convolutional Networks for Semantic Segmentation
Official Repo
Code Snippet
Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.
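For intuition, here is a minimal PyTorch sketch of the two ideas in the abstract: 1x1 convolutions in place of fully connected classifier layers (so the network accepts inputs of arbitrary size) and a skip connection that fuses a deep, coarse score map with a shallow, fine one before upsampling to full resolution. This is a toy illustration written for this page, not the paper's VGG-based model or the implementation in this repo; the class name `ToyFCN`, the channel widths, and `num_classes=19` are illustrative assumptions.

```python
# Toy FCN-style network: fully convolutional encoder, 1x1 conv classifiers,
# and a coarse-to-fine skip fusion. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyFCN(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        # Shallow stage: stride 4, keeps fine spatial detail.
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Deep stage: stride 16, coarser but semantically richer.
        self.stage2 = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolutions act as the "fully connected" classifier layers.
        self.score_shallow = nn.Conv2d(64, num_classes, 1)
        self.score_deep = nn.Conv2d(256, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        fine = self.stage1(x)       # (N, 64, H/4, W/4)
        coarse = self.stage2(fine)  # (N, 256, H/16, W/16)
        score = self.score_deep(coarse)
        # Upsample the coarse prediction and fuse it with the shallow score map
        # (deep semantics combined with shallow appearance).
        score = F.interpolate(score, size=fine.shape[2:],
                              mode="bilinear", align_corners=False)
        score = score + self.score_shallow(fine)
        # Final upsampling yields dense per-pixel class scores at input resolution.
        return F.interpolate(score, size=(h, w),
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = ToyFCN(num_classes=19)
    logits = model(torch.randn(1, 3, 512, 1024))  # arbitrary input size
    print(logits.shape)  # torch.Size([1, 19, 512, 1024])
```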
@article{shelhamer2017fully,
  title={Fully convolutional networks for semantic segmentation},
  author={Shelhamer, Evan and Long, Jonathan and Darrell, Trevor},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={39},
  number={4},
  pages={640--651},
  year={2017},
  publisher={IEEE}
}
Cityscapes

| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------ | -------- | --------- | ------- | -------- | -------------- | ---- | ------------- | ------ | -------- |
| FCN | R-50-D8 | 512x1024 | 40000 | 5.7 | 4.17 | 72.25 | 73.36 | config | model \| log |
| FCN | R-101-D8 | 512x1024 | 40000 | 9.2 | 2.66 | 75.45 | 76.58 | config | model \| log |
| FCN | R-50-D8 | 769x769 | 40000 | 6.5 | 1.80 | 71.47 | 72.54 | config | model \| log |
| FCN | R-101-D8 | 769x769 | 40000 | 10.4 | 1.19 | 73.93 | 75.14 | config | model \| log |
| FCN | R-18-D8 | 512x1024 | 80000 | 1.7 | 14.65 | 71.11 | 72.91 | config | model \| log |
| FCN | R-50-D8 | 512x1024 | 80000 | - | - | 73.61 | 74.24 | config | model \| log |
| FCN | R-101-D8 | 512x1024 | 80000 | - | - | 75.13 | 75.94 | config | model \| log |
| FCN (FP16) | R-101-D8 | 512x1024 | 80000 | 5.37 | 8.64 | 76.80 | - | config | model \| log |
| FCN | R-18-D8 | 769x769 | 80000 | 1.9 | 6.40 | 70.80 | 73.16 | config | model \| log |
| FCN | R-50-D8 | 769x769 | 80000 | - | - | 72.64 | 73.32 | config | model \| log |
| FCN | R-101-D8 | 769x769 | 80000 | - | - | 75.52 | 76.61 | config | model \| log |
| FCN | R-18b-D8 | 512x1024 | 80000 | 1.6 | 16.74 | 70.24 | 72.77 | config | model \| log |
| FCN | R-50b-D8 | 512x1024 | 80000 | 5.6 | 4.20 | 75.65 | 77.59 | config | model \| log |
| FCN | R-101b-D8 | 512x1024 | 80000 | 9.1 | 2.73 | 77.37 | 78.77 | config | model \| log |
| FCN | R-18b-D8 | 769x769 | 80000 | 1.7 | 6.70 | 69.66 | 72.07 | config | model \| log |
| FCN | R-50b-D8 | 769x769 | 80000 | 6.3 | 1.82 | 73.83 | 76.60 | config | model \| log |
| FCN | R-101b-D8 | 769x769 | 80000 | 10.3 | 1.15 | 77.02 | 78.67 | config | model \| log |
| FCN (D6) | R-50-D16 | 512x1024 | 40000 | 3.4 | 10.22 | 77.06 | 78.85 | config | model \| log |
| FCN (D6) | R-50-D16 | 512x1024 | 80000 | - | 10.35 | 77.27 | 78.88 | config | model \| log |
| FCN (D6) | R-50-D16 | 769x769 | 40000 | 3.7 | 4.17 | 76.82 | 78.22 | config | model \| log |
| FCN (D6) | R-50-D16 | 769x769 | 80000 | - | 4.15 | 77.04 | 78.40 | config | model \| log |
| FCN (D6) | R-101-D16 | 512x1024 | 40000 | 4.5 | 8.04 | 77.36 | 79.18 | config | model \| log |
| FCN (D6) | R-101-D16 | 512x1024 | 80000 | - | 8.26 | 78.46 | 80.42 | config | model \| log |
| FCN (D6) | R-101-D16 | 769x769 | 40000 | 5.0 | 3.12 | 77.28 | 78.95 | config | model \| log |
| FCN (D6) | R-101-D16 | 769x769 | 80000 | - | 3.21 | 78.06 | 79.58 | config | model \| log |
| FCN (D6) | R-50b-D16 | 512x1024 | 80000 | 3.2 | 10.16 | 76.99 | 79.03 | config | model \| log |
| FCN (D6) | R-50b-D16 | 769x769 | 80000 | 3.6 | 4.17 | 76.86 | 78.52 | config | model \| log |
| FCN (D6) | R-101b-D16 | 512x1024 | 80000 | 4.3 | 8.46 | 77.72 | 79.53 | config | model \| log |
| FCN (D6) | R-101b-D16 | 769x769 | 80000 | 4.8 | 3.32 | 77.34 | 78.91 | config | model \| log |
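Each row's config and checkpoint can be loaded for inference through the mmsegmentation Python API. A minimal sketch, assuming the 0.x-series API (`init_segmentor` / `inference_segmentor` from `mmseg.apis`; the 1.x series renamed these to `init_model` / `inference_model`); the config, checkpoint, and image paths below are placeholders rather than guaranteed filenames.

```python
# Single-image inference with one of the checkpoints listed above.
# Assumes the mmsegmentation 0.x API; all paths are placeholders.
from mmseg.apis import init_segmentor, inference_segmentor

config_file = 'configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py'      # placeholder path
checkpoint_file = 'checkpoints/fcn_r50-d8_512x1024_40k_cityscapes.pth'  # placeholder path

model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
result = inference_segmentor(model, 'demo.png')  # list with one H x W array of class indices
```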
ADE20K

| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------ | -------- | --------- | ------- | -------- | -------------- | ---- | ------------- | ------ | -------- |
| FCN | R-50-D8 | 512x512 | 80000 | 8.5 | 23.49 | 35.94 | 37.94 | config | model \| log |
| FCN | R-101-D8 | 512x512 | 80000 | 12 | 14.78 | 39.61 | 40.83 | config | model \| log |
| FCN | R-50-D8 | 512x512 | 160000 | - | - | 36.10 | 38.08 | config | model \| log |
| FCN | R-101-D8 | 512x512 | 160000 | - | - | 39.91 | 41.40 | config | model \| log |
Pascal VOC 2012 + Aug

| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------ | -------- | --------- | ------- | -------- | -------------- | ---- | ------------- | ------ | -------- |
| FCN | R-50-D8 | 512x512 | 20000 | 5.7 | 23.28 | 67.08 | 69.94 | config | model \| log |
| FCN | R-101-D8 | 512x512 | 20000 | 9.2 | 14.81 | 71.16 | 73.57 | config | model \| log |
| FCN | R-50-D8 | 512x512 | 40000 | - | - | 66.97 | 69.04 | config | model \| log |
| FCN | R-101-D8 | 512x512 | 40000 | - | - | 69.91 | 72.38 | config | model \| log |
Pascal Context

| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------ | -------- | --------- | ------- | -------- | -------------- | ---- | ------------- | ------ | -------- |
| FCN | R-101-D8 | 480x480 | 40000 | - | 9.93 | 44.43 | 45.63 | config | model \| log |
| FCN | R-101-D8 | 480x480 | 80000 | - | - | 44.13 | 45.26 | config | model \| log |
Pascal Context 59

| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ------ | -------- | --------- | ------- | -------- | -------------- | ---- | ------------- | ------ | -------- |
| FCN | R-101-D8 | 480x480 | 40000 | - | - | 48.42 | 50.4 | config | model \| log |
| FCN | R-101-D8 | 480x480 | 80000 | - | - | 49.35 | 51.38 | config | model \| log |
Note:

- `FP16` means Mixed Precision (FP16) is adopted in training.
- `FCN D6` means the dilation rate of the convolution operator in FCN is 6.
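For reference, the sketch below illustrates the two notes at the plain PyTorch level (not this repo's config system): a 3x3 convolution with dilation 6 and matching padding 6, which enlarges the receptive field without changing spatial size, and a single mixed-precision training step with `torch.cuda.amp`. The channel widths and tensor shapes are illustrative assumptions.

```python
# PyTorch-level illustration of the "FCN D6" and "FP16" notes. Illustrative only.
import torch
import torch.nn as nn

# "FCN D6": dilation=6 with padding=6 keeps the spatial size while widening the
# receptive field of the 3x3 convolution.
dilated_conv = nn.Conv2d(256, 256, kernel_size=3, padding=6, dilation=6)

x = torch.randn(1, 256, 64, 128)
print(dilated_conv(x).shape)  # torch.Size([1, 256, 64, 128])

# "FP16": mixed-precision training runs the forward pass under autocast and scales
# the loss before backward; mmsegmentation enables this through its config instead.
if torch.cuda.is_available():
    model = dilated_conv.cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler()
    with torch.cuda.amp.autocast():
        out = model(x.cuda())
        loss = out.float().mean()  # dummy loss for the demo
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```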