[Fix] fix fast scnn #606
Conversation
Codecov Report
Coverage Diff

| | master | #606 | +/- |
|---|---|---|---|
| Coverage | 85.17% | 85.18% | |
| Files | 105 | 105 | |
| Lines | 5668 | 5670 | +2 |
| Branches | 923 | 923 | |
| Hits | 4828 | 4830 | +2 |
| Misses | 662 | 662 | |
| Partials | 178 | 178 | |
mmseg/models/backbones/fast_scnn.py (outdated)
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.bottleneck1(x)
        x = self.bottleneck2(x)
        x = self.bottleneck3(x)
-       x = torch.cat([x, *self.ppm(x)], dim=1)
+       x = torch.cat([x, *self.ppm(x)[::-1]], dim=1)
`[::-1]` is not necessary if we train from scratch.
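For context, a minimal sketch of the concatenation being discussed (`ppm_outs` stands in for the list of pooled-and-upsampled feature maps the PPM module returns; nothing here is the exact Fast-SCNN code):

```python
import torch


def fuse_ppm(x, ppm_outs):
    """Concatenate a feature map with its PPM outputs along the channel dim."""
    # When training from scratch, the order of ppm_outs does not matter, so the
    # plain concatenation below is enough.
    fused = torch.cat([x, *ppm_outs], dim=1)
    # The reversed order ([::-1]) is only needed when the channel layout has to
    # match externally converted weights (e.g. the PaddleSeg weights mentioned
    # in the commit history).
    # fused = torch.cat([x, *ppm_outs[::-1]], dim=1)
    return fused
```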
@@ -23,6 +23,7 @@
        img_scale=(2048, 1024),
        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=False,
+       # flip=True,
Delete # flip=True
]

# Re-config the data sampler.
data = dict(samples_per_gpu=4, workers_per_gpu=4)
If we use 8 GPUs and a batch size of 4 per GPU, please rename the config name from 4x8 to 8x4.
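For reference, the sampler line the comment refers to, as it would appear in the config; with 8 GPUs the effective batch size is 8 × 4 = 32, which the naming convention encodes as `8x4` (GPUs first, then per-GPU batch size):

```python
# 8 GPUs x samples_per_gpu=4 -> total batch size 32, written as "8x4" in the
# config file name rather than "4x8".
data = dict(samples_per_gpu=4, workers_per_gpu=4)
```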
mmseg/models/backbones/fast_scnn.py (outdated)
        self.dsconv1 = DepthwiseSeparableConvModule(
            dw_channels1,
            dw_channels2,
            kernel_size=3,
            stride=2,
            padding=1,
-           norm_cfg=self.norm_cfg)
+           norm_cfg=self.norm_cfg,
+           dw_act_cfg=None)
We can have an ablation study for `dw_act_cfg`.
mmseg/models/backbones/fast_scnn.py (outdated)
        self.dsconv2 = DepthwiseSeparableConvModule(
            dw_channels2,
            out_channels,
            kernel_size=3,
            stride=2,
            padding=1,
-           norm_cfg=self.norm_cfg)
+           norm_cfg=self.norm_cfg,
+           dw_act_cfg=None)
We can have an ablation study for dw_act_cfg.
mmseg/models/backbones/fast_scnn.py (outdated)
@@ -136,11 +141,13 @@ def __init__(self,
            conv_cfg=self.conv_cfg,
            norm_cfg=self.norm_cfg,
            act_cfg=self.act_cfg,
-           align_corners=align_corners)
+           align_corners=True)
`align_corners=True` --> `align_corners=align_corners`. We can also have an ablation study for `align_corners`.
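A minimal sketch of what threading the flag through looks like, so the ablation stays a pure config switch (`upsample` is an illustrative helper, not the actual mmseg code, which goes through its own resize wrapper):

```python
import torch.nn.functional as F


def upsample(feat, size, align_corners):
    # align_corners is forwarded from the module's __init__ argument rather
    # than being hardcoded to True, so it can be flipped from the config.
    return F.interpolate(
        feat, size=size, mode='bilinear', align_corners=align_corners)
```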
Have an ablation study for `DepthwiseSeparableConvModule`: one with activation after the depthwise convolution and one without.
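A minimal sketch of the two variants being compared, assuming `mmcv.cnn.DepthwiseSeparableConvModule`; the channel numbers are illustrative, not the real Fast-SCNN widths:

```python
from mmcv.cnn import DepthwiseSeparableConvModule

# Variant A: dw_act_cfg left at its 'default' sentinel, so the shared act_cfg
# (ReLU) is also applied right after the depthwise 3x3 convolution.
with_dw_act = DepthwiseSeparableConvModule(
    32, 48, kernel_size=3, stride=2, padding=1, norm_cfg=dict(type='BN'))

# Variant B: dw_act_cfg=None disables the activation after the depthwise
# convolution; only the pointwise 1x1 keeps its activation.
without_dw_act = DepthwiseSeparableConvModule(
    32, 48, kernel_size=3, stride=2, padding=1, norm_cfg=dict(type='BN'),
    dw_act_cfg=None)
```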
What are the results of our original model with more iterations (160k and 320k)?
Training results of the master branch model:
We may describe what we changed in the PR message.
@@ -1,10 +1,10 @@
_base_ = [
fast_scnn_8x4_160k_lr0.12_cityscapes.py --> fast_scnn_lr0.12_8x4_160k_cityscapes.py
@@ -37,7 +37,8 @@ def __init__(self,
                 conv_cfg=None,
                 norm_cfg=dict(type='BN'),
                 act_cfg=dict(type='ReLU6'),
-                with_cp=False):
+                with_cp=False,
+                **kwards):
- **kwards):
+ **kwargs):
@@ -55,7 +56,8 @@ def __init__(self,
                kernel_size=1,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg,
-               act_cfg=act_cfg))
+               act_cfg=act_cfg,
+               **kwards))
- **kwards))
+ **kwargs))
@@ -67,14 +69,16 @@ def __init__(self,
                groups=hidden_dim,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg,
-               act_cfg=act_cfg),
+               act_cfg=act_cfg,
+               **kwards),
- **kwards),
+ **kwargs),
            ConvModule(
                in_channels=hidden_dim,
                out_channels=out_channels,
                kernel_size=1,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg,
-               act_cfg=None)
+               act_cfg=None,
+               **kwards)
- **kwards)
+ **kwargs)
@@ -22,7 +22,7 @@ class PPM(nn.ModuleList):
    """

    def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg,
-                act_cfg, align_corners):
+                act_cfg, align_corners, **kwards):
- act_cfg, align_corners, **kwards):
+ act_cfg, align_corners, **kwargs):
@@ -41,7 +41,8 @@ def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg,
                    1,
                    conv_cfg=self.conv_cfg,
                    norm_cfg=self.norm_cfg,
-                   act_cfg=self.act_cfg)))
+                   act_cfg=self.act_cfg,
+                   **kwards)))
- **kwards)))
+ **kwargs)))
        for i in range(1, self.num_convs):
            self.convs[i] = DepthwiseSeparableConvModule(
                self.channels,
                self.channels,
                kernel_size=self.kernel_size,
                padding=self.kernel_size // 2,
-               norm_cfg=self.norm_cfg)
+               norm_cfg=self.norm_cfg,
+               dw_act_cfg=None)
- dw_act_cfg=None)
+ dw_act_cfg=dw_act_cfg)
        if self.concat_input:
            self.conv_cat = DepthwiseSeparableConvModule(
                self.in_channels + self.channels,
                self.channels,
                kernel_size=self.kernel_size,
                padding=self.kernel_size // 2,
-               norm_cfg=self.norm_cfg)
+               norm_cfg=self.norm_cfg,
+               dw_act_cfg=None)
- dw_act_cfg=None)
+ dw_act_cfg=dw_act_cfg)
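A trimmed-down sketch of what the suggestion amounts to: accept `dw_act_cfg` in the head's `__init__` and forward it instead of hardcoding `None`. Class and argument names here are illustrative, not the exact `DepthwiseSeparableFCNHead` signature:

```python
import torch
import torch.nn as nn
from mmcv.cnn import DepthwiseSeparableConvModule


class SepConvCat(nn.Module):
    """Illustrative block: dw_act_cfg is forwarded, not hardcoded to None."""

    def __init__(self, in_channels, channels, kernel_size=3,
                 norm_cfg=dict(type='BN'), dw_act_cfg=None):
        super().__init__()
        self.conv_cat = DepthwiseSeparableConvModule(
            in_channels + channels,
            channels,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
            norm_cfg=norm_cfg,
            dw_act_cfg=dw_act_cfg)  # forwarded from the caller

    def forward(self, inputs, features):
        # Concatenate the head input with earlier features, then fuse them.
        return self.conv_cat(torch.cat([inputs, features], dim=1))
```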
mmseg/models/backbones/fast_scnn.py (outdated)
@@ -290,6 +309,8 @@ class FastSCNN(BaseModule):
            dict(type='ReLU')
        align_corners (bool): align_corners argument of F.interpolate.
            Default: False
+       dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is
Same as above.
`'default'` is a string value at `DepthwiseSeparableConvModule`. In the `LearningToDownsample` module, I want to set it to `None`.
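A hedged paraphrase of how that sentinel behaves (the real logic lives inside mmcv's `DepthwiseSeparableConvModule`; this helper only spells out the rule the comment relies on):

```python
def resolve_dw_act_cfg(dw_act_cfg, act_cfg):
    """Mirror of the 'default' fallback rule for the depthwise activation."""
    # 'default' -> inherit the module-wide act_cfg
    # None      -> no activation after the depthwise conv
    #              (what LearningToDownsample wants here)
    # a dict    -> use that activation config as given
    return act_cfg if dw_act_cfg == 'default' else dw_act_cfg
```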
* [Refactor] Match paddle seg weight
* Match inference
* fix exp setting
* delete comment and rename config files
* replace hard code with config parameters
* fix ppm concat order
* remove hardcode
* update result
* fix typo
* complement docstring
* complement FutureFusionModule docstring
* modify log link
* polish README description
* polish
* Update README_cn.md
Motivation
Training with the current Fast-SCNN config gives low performance, so we fix the config and code.
Modification
- Change `loss_weight` of the decode head from 0.4 to 1; the training result is then correct (see the config sketch after this list).
- Change `conv 1x1` to `conv 3x3` with padding 1.
- Add `dw_act_cfg` at the backbone `__init__` function for customizing the depthwise `ConvModule`.
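A minimal sketch of the first change, assuming the usual mmseg loss config layout; only `loss_weight` comes from this PR, the other keys are illustrative:

```python
model = dict(
    decode_head=dict(
        loss_decode=dict(
            type='CrossEntropyLoss',
            use_sigmoid=False,
            loss_weight=1.0)))  # was 0.4, which led to the low performance
```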
Experiments: