Commit 5876868

Authored by sixiaozheng, sennnnn and CuttlefishXuan
[Feature] Official implementation of SETR (open-mmlab#531)
* Adjust vision transformer backbone architectures; * Add DropPath, trunc_normal_ for VisionTransformer implementation; * Add class token buring intermediate period and remove it during final period; * Fix some parameters loss bug; * * Store intermediate token features and impose no processes on them; * Remove class token and reshape entire token feature from NLC to NCHW; * Fix some doc error * Add a arg for VisionTransformer backbone to control if input class token into transformer; * Add stochastic depth decay rule for DropPath; * * Fix output bug when input_cls_token=False; * Add related unit test; * Re-implement of SETR * Add two head -- SETRUPHead (Naive, PUP) & SETRMLAHead (MLA); * * Modify some docs of heads of SETR; * Add MLA auxiliary head of SETR; * * Modify some arg of setr heads; * Add unit test for setr heads; * * Add 768x768 cityscapes dataset config; * Add Backbone: SETR -- Backbone: MLA, PUP, Naive; * Add SETR cityscapes training & testing config; * * Fix the low code coverage of unit test about heads of setr; * Remove some rebundant error capture; * * Add pascal context dataset & ade20k dataset config; * Modify auxiliary head relative config; * Modify folder structure. * add setr * modify vit * Fix the test_cfg arg position; * Fix some learning schedule bug; * optimize setr code * Add arg: final_reshape to control if converting output feature information from NLC to NCHW; * Fix the default value of final_reshape; * Modify arg: final_reshape to arg: out_shape; * Fix some unit test bug; * Add MLA neck; * Modify setr configs to add MLA neck; * Modify MLA decode head to remove rebundant structure; * Remove some rebundant files. * * Fix the code style bug; * Remove some rebundant files; * Modify some unit tests of SETR; * Ignoring CityscapesCoarseDataset and MapillaryDataset. 
* Fix the activation function loss bug; * Fix the img_size bug of SETR_PUP_ADE20K * * Fix the lint bug of transformers.py; * Add mla neck unit test; * Convert vit of setr out shape from NLC to NCHW. * * Modify Resize action of data pipeline; * Fix deit related bug; * Set find_unused_parameters=False for pascal context dataset; * Remove arg: find_unused_parameters which is False by default. * Error auxiliary head of PUP deit * Remove the minimal restrict of slide inference. * Modify doc string of Resize * Seperate this part of code to a new PR open-mmlab#544 * * Remove some rebundant codes; * Modify unit tests of SETR heads; * Fix the tuple in_channels of mla_deit. * Modify code style * Move detailed definition of auxiliary head into model config dict; * Add some setr config for default cityscapes.py; * Fix the doc string of SETR head; * Modify implementation of SETR Heads * Remove setr aux head and use fcn head to replace it; * Remove arg: img_size and remove last interpolate op of heads; * Rename arg: conv3x3_conv1x1 to kernel_size of SETRUPHead; * non-square input support for setr heads * Modify config argument for above commits * Remove norm_layer argument of SETRMLAHead * Add mla_align_corners for MLAModule interpolate * [Refactor]Refactor of SETRMLAHead * Modify Head implementation; * Modify Head unit test; * Modify related config file; * [Refactor]MLA Neck * Fix config bug * [Refactor]SETR Naive Head and SETR PUP Head * [Fix]Fix the lack of arg: act_cfg and arg: norm_cfg * Fix config error * Refactor of SETR MLA, Naive, PUP heads. * Modify some attribute name of SETR Heads. * Modify setr configs to adapt new vit code. * Fix trunc_normal_ bug * Parameters init adjustment. * Remove redundant doc string of SETRUPHead * Fix pretrained bug * [Fix] Fix vit init bug * Add some vit unit tests * Modify module import * Remove norm from PatchEmbed * Fix pretrain weights bug * Modify pretrained judge * Fix some gradient backward bugs. 
* Add some unit tests to improve code cov * Fix init_weights of setr up head * Add DropPath in FFN * Finish benchmark of SETR 1. Add benchmark information into README.MD of SETR; 2. Fix some name bugs of vit; * Remove DropPath implementation and use DropPath from mmcv. * Modify out_indices arg * Fix out_indices bug. * Remove cityscapes base dataset config. Co-authored-by: sennnnn <201730271412@mail.scut.edu.cn> Co-authored-by: CuttlefishXuan <zhaoxinxuan1997@gmail.com>
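Several of the changes above revolve around converting ViT token features from NLC to NCHW, removing the class token first when it is kept. A minimal NumPy sketch of that reshape (`nlc_to_nchw` here is an illustrative helper, not the actual mmseg function, which operates on torch tensors):

```python
import numpy as np

def nlc_to_nchw(tokens, hw_shape, with_cls_token=True):
    """Drop the class token (if present) and reshape (N, L, C) -> (N, C, H, W)."""
    n, l, c = tokens.shape
    h, w = hw_shape
    if with_cls_token:
        tokens = tokens[:, 1:, :]  # the class token sits at index 0
    assert tokens.shape[1] == h * w, 'token count must match H * W'
    return tokens.reshape(n, h, w, c).transpose(0, 3, 1, 2)

# A 768x768 input with 16x16 patches gives 48x48 = 2304 patch tokens (+1 cls).
x = np.zeros((2, 2305, 1024))
print(nlc_to_nchw(x, (48, 48)).shape)  # (2, 1024, 48, 48)
```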
1 parent 98dd016 commit 5876868

21 files changed (+914, −101 lines)

configs/_base_/models/setr_mla.py

+96
```python
# model settings
backbone_norm_cfg = dict(type='LN', eps=1e-6, requires_grad=True)
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
    pretrained=\
    'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_384-b3be5167.pth',  # noqa
    backbone=dict(
        type='VisionTransformer',
        img_size=(768, 768),
        patch_size=16,
        in_channels=3,
        embed_dims=1024,
        num_layers=24,
        num_heads=16,
        out_indices=(5, 11, 17, 23),
        drop_rate=0.1,
        norm_cfg=backbone_norm_cfg,
        with_cls_token=False,
        interpolate_mode='bilinear',
    ),
    neck=dict(
        type='MLANeck',
        in_channels=[1024, 1024, 1024, 1024],
        out_channels=256,
        norm_cfg=norm_cfg,
        act_cfg=dict(type='ReLU'),
    ),
    decode_head=dict(
        type='SETRMLAHead',
        in_channels=(256, 256, 256, 256),
        channels=512,
        in_index=(0, 1, 2, 3),
        dropout_ratio=0,
        mla_channels=128,
        num_classes=19,
        norm_cfg=norm_cfg,
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=[
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=0,
            dropout_ratio=0,
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=19,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=1,
            dropout_ratio=0,
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=19,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=2,
            dropout_ratio=0,
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=19,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=3,
            dropout_ratio=0,
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=19,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    ],
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
```
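The MLA variant taps four evenly spaced layers of the 24-layer ViT-L backbone, `out_indices=(5, 11, 17, 23)` (i.e. after layers 6, 12, 18 and 24); each tapped feature feeds one MLA stream and one auxiliary FCNHead. A small sketch of that spacing (`evenly_spaced_out_indices` is a hypothetical helper for illustration; the config simply hard-codes the tuple):

```python
def evenly_spaced_out_indices(num_layers, num_outs):
    """Pick num_outs 0-based layer indices, evenly spaced and ending at the last layer."""
    step = num_layers // num_outs
    return tuple(step * (i + 1) - 1 for i in range(num_outs))

print(evenly_spaced_out_indices(24, 4))  # (5, 11, 17, 23)
```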

configs/_base_/models/setr_naive.py

+81
```python
# model settings
backbone_norm_cfg = dict(type='LN', eps=1e-6, requires_grad=True)
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
    pretrained=\
    'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_384-b3be5167.pth',  # noqa
    backbone=dict(
        type='VisionTransformer',
        img_size=(768, 768),
        patch_size=16,
        in_channels=3,
        embed_dims=1024,
        num_layers=24,
        num_heads=16,
        out_indices=(9, 14, 19, 23),
        drop_rate=0.1,
        norm_cfg=backbone_norm_cfg,
        with_cls_token=True,
        interpolate_mode='bilinear',
    ),
    decode_head=dict(
        type='SETRUPHead',
        in_channels=1024,
        channels=256,
        in_index=3,
        num_classes=19,
        dropout_ratio=0,
        norm_cfg=norm_cfg,
        num_convs=1,
        up_scale=4,
        kernel_size=1,
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=[
        dict(
            type='SETRUPHead',
            in_channels=1024,
            channels=256,
            in_index=0,
            num_classes=19,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            num_convs=1,
            up_scale=4,
            kernel_size=1,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='SETRUPHead',
            in_channels=1024,
            channels=256,
            in_index=1,
            num_classes=19,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            num_convs=1,
            up_scale=4,
            kernel_size=1,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='SETRUPHead',
            in_channels=1024,
            channels=256,
            in_index=2,
            num_classes=19,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            num_convs=1,
            up_scale=4,
            kernel_size=1,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4))
    ],
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
```

configs/_base_/models/setr_pup.py

+81
```python
# model settings
backbone_norm_cfg = dict(type='LN', eps=1e-6, requires_grad=True)
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
    pretrained=\
    'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_384-b3be5167.pth',  # noqa
    backbone=dict(
        type='VisionTransformer',
        img_size=(768, 768),
        patch_size=16,
        in_channels=3,
        embed_dims=1024,
        num_layers=24,
        num_heads=16,
        out_indices=(9, 14, 19, 23),
        drop_rate=0.1,
        norm_cfg=backbone_norm_cfg,
        with_cls_token=True,
        interpolate_mode='bilinear',
    ),
    decode_head=dict(
        type='SETRUPHead',
        in_channels=1024,
        channels=256,
        in_index=3,
        num_classes=19,
        dropout_ratio=0,
        norm_cfg=norm_cfg,
        num_convs=4,
        up_scale=2,
        kernel_size=3,
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=[
        dict(
            type='SETRUPHead',
            in_channels=1024,
            channels=256,
            in_index=0,
            num_classes=19,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            num_convs=1,
            up_scale=4,
            kernel_size=3,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='SETRUPHead',
            in_channels=1024,
            channels=256,
            in_index=1,
            num_classes=19,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            num_convs=1,
            up_scale=4,
            kernel_size=3,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='SETRUPHead',
            in_channels=1024,
            channels=256,
            in_index=2,
            num_classes=19,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            num_convs=1,
            up_scale=4,
            kernel_size=3,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    ],
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
```

configs/setr/README.md

+25
# Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers

## Introduction

<!-- [ALGORITHM] -->

```latex
@article{zheng2020rethinking,
  title={Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers},
  author={Zheng, Sixiao and Lu, Jiachen and Zhao, Hengshuang and Zhu, Xiatian and Luo, Zekun and Wang, Yabiao and Fu, Yanwei and Feng, Jianfeng and Xiang, Tao and Torr, Philip HS and others},
  journal={arXiv preprint arXiv:2012.15840},
  year={2020}
}
```

## Results and models

### ADE20K

| Method | Backbone | Crop Size | Batch Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| ---------- | ----- | ------- | -- | ------ | ----- | ---- | ----- | ----: | ------ | -------- |
| SETR-Naive | ViT-L | 512x512 | 16 | 160000 | 18.40 | 4.72 | 48.28 | 49.56 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/setr/setr_naive_512x512_160k_b16_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_naive_512x512_160k_b16_ade20k/setr_naive_512x512_160k_b16_ade20k_20210619_191258-061f24f5.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_naive_512x512_160k_b16_ade20k/setr_naive_512x512_160k_b16_ade20k_20210619_191258.log.json) |
| SETR-PUP | ViT-L | 512x512 | 16 | 160000 | 19.54 | 4.50 | 48.24 | 49.99 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/setr/setr_pup_512x512_160k_b16_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_pup_512x512_160k_b16_ade20k/setr_pup_512x512_160k_b16_ade20k_20210619_191343-7e0ce826.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_pup_512x512_160k_b16_ade20k/setr_pup_512x512_160k_b16_ade20k_20210619_191343.log.json) |
| SETR-MLA | ViT-L | 512x512 | 8 | 160000 | 10.96 | - | 47.34 | 49.05 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/setr/setr_mla_512x512_160k_b8_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_mla_512x512_160k_b8_ade20k/setr_mla_512x512_160k_b8_ade20k_20210619_191118-c6d21df0.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_mla_512x512_160k_b8_ade20k/setr_mla_512x512_160k_b8_ade20k_20210619_191118.log.json) |
| SETR-MLA | ViT-L | 512x512 | 16 | 160000 | 17.30 | 5.25 | 47.54 | 49.37 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/setr/setr_mla_512x512_160k_b16_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_mla_512x512_160k_b16_ade20k/setr_mla_512x512_160k_b16_ade20k_20210619_191057-f9741de7.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/setr/setr_mla_512x512_160k_b16_ade20k/setr_mla_512x512_160k_b16_ade20k_20210619_191057.log.json) |
```python
_base_ = ['./setr_mla_512x512_160k_b8_ade20k.py']

# num_gpus: 8 -> batch_size: 16
data = dict(samples_per_gpu=2)
```
```python
_base_ = [
    '../_base_/models/setr_mla.py', '../_base_/datasets/ade20k.py',
    '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    backbone=dict(img_size=(512, 512), drop_rate=0.),
    decode_head=dict(num_classes=150),
    auxiliary_head=[
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=0,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            act_cfg=dict(type='ReLU'),
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=150,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=1,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            act_cfg=dict(type='ReLU'),
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=150,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=2,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            act_cfg=dict(type='ReLU'),
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=150,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
        dict(
            type='FCNHead',
            in_channels=256,
            channels=256,
            in_index=3,
            dropout_ratio=0,
            norm_cfg=norm_cfg,
            act_cfg=dict(type='ReLU'),
            num_convs=0,
            kernel_size=1,
            concat_input=False,
            num_classes=150,
            align_corners=False,
            loss_decode=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    ],
    test_cfg=dict(mode='slide', crop_size=(512, 512), stride=(341, 341)),
)

optimizer = dict(
    lr=0.001,
    weight_decay=0.0,
    paramwise_cfg=dict(custom_keys={'head': dict(lr_mult=10.)}))

# num_gpus: 8 -> batch_size: 8
data = dict(samples_per_gpu=1)
```
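Unlike the base configs' `mode='whole'`, this config tests with slide inference: a 512x512 crop swept with a 341-pixel stride, so adjacent windows overlap. A sketch of the window-count arithmetic behind those numbers (`num_windows` is an illustrative helper; slide inference also shifts the last window back so it ends at the image border):

```python
import math

def num_windows(size, crop, stride):
    """Number of sliding windows along one axis for slide inference."""
    if size <= crop:
        return 1
    return math.ceil((size - crop) / stride) + 1

# A 1024-pixel axis with crop 512 and stride 341:
# windows start at 0, 341 and (shifted back) 512 -> 3 windows.
print(num_windows(1024, 512, 341))  # 3
```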
