[Fix] fix fast scnn #606

Merged 14 commits on Jul 2, 2021
Conversation

@xiexinch (Collaborator) commented Jun 18, 2021

Motivation

Training with the current Fast-SCNN config yields low performance; this PR fixes the config and code.

Modification

  • Set the loss_weight of the decode head from 0.4 to 1; the training result is then correct.
  • To improve performance, we align our network code with PaddleSeg. The key modification is changing a 1x1 conv to a 3x3 conv with padding 1.
  • Add a dw_act_cfg parameter to the backbone's __init__ function for customizing the depthwise ConvModule.
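A minimal sketch of the two config-level changes above, written as mmseg-style config dicts; every key and value except loss_weight and dw_act_cfg is an illustrative placeholder, not copied from the actual Fast-SCNN config:

```python
# Hypothetical config fragment illustrating the changes described above.
decode_head = dict(
    type='DepthwiseSeparableFCNHead',
    loss_decode=dict(
        type='CrossEntropyLoss',
        # Raised from 0.4 to 1 so the decode head trains at full weight.
        loss_weight=1.0))

backbone = dict(
    type='FastSCNN',
    # New parameter: None disables the activation after the depthwise conv.
    dw_act_cfg=None)
```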

Experiments:

| iters | mIoU  |
| ----- | ----- |
| 160k  | 70.96 |

@codecov bot commented Jun 18, 2021

Codecov Report

Merging #606 (2b4e1bd) into master (0997224) will increase coverage by 0.00%.
The diff coverage is 100.00%.

❗ Current head 2b4e1bd differs from pull request most recent head 17886fb. Consider uploading reports for the commit 17886fb to get more accurate results

@@           Coverage Diff           @@
##           master     #606   +/-   ##
=======================================
  Coverage   85.17%   85.18%           
=======================================
  Files         105      105           
  Lines        5668     5670    +2     
  Branches      923      923           
=======================================
+ Hits         4828     4830    +2     
  Misses        662      662           
  Partials      178      178           
| Flag      | Coverage Δ                    |
| --------- | ----------------------------- |
| unittests | 85.16% <100.00%> (+<0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
| --- | --- |
| mmseg/models/decode_heads/psp_head.py | 100.00% <ø> (ø) |
| mmseg/models/utils/inverted_residual.py | 100.00% <ø> (ø) |
| mmseg/models/backbones/fast_scnn.py | 97.08% <100.00%> (+0.05%) ⬆️ |
| mmseg/models/decode_heads/sep_fcn_head.py | 100.00% <100.00%> (ø) |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 0997224...17886fb.

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.bottleneck1(x)
        x = self.bottleneck2(x)
        x = self.bottleneck3(x)
-       x = torch.cat([x, *self.ppm(x)], dim=1)
+       x = torch.cat([x, *self.ppm(x)[::-1]], dim=1)
[::-1] is not necessary if we train from scratch.
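The `[::-1]` only reverses the order in which the pooled PPM outputs are concatenated onto `x`; the channel contents are identical either way, which is why the order matters only when loading weights trained with the other convention (here, PaddleSeg's) rather than training from scratch. A small NumPy sketch with made-up shapes:

```python
import numpy as np

# Stand-ins for the feature map and two PPM pooled outputs
# (shapes and values are illustrative, not Fast-SCNN's real ones).
x = np.zeros((1, 4, 8, 8))
ppm_outs = [np.full((1, 2, 8, 8), 1.0), np.full((1, 2, 8, 8), 2.0)]

# Original order: x, ppm[0], ppm[1]
a = np.concatenate([x, *ppm_outs], axis=1)
# Reversed order: x, ppm[1], ppm[0] -- same channels, different layout.
b = np.concatenate([x, *ppm_outs[::-1]], axis=1)

print(a[0, :, 0, 0])  # [0. 0. 0. 0. 1. 1. 2. 2.]
print(b[0, :, 0, 0])  # [0. 0. 0. 0. 2. 2. 1. 1.]
```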

@@ -23,6 +23,7 @@
         img_scale=(2048, 1024),
         # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
         flip=False,
+        # flip=True,

Delete # flip=True

]

# Re-config the data sampler.
data = dict(samples_per_gpu=4, workers_per_gpu=4)

If we use 8 GPUs with a batch size of 4 per GPU, please rename the config from 4x8 to 8x4.


        self.dsconv1 = DepthwiseSeparableConvModule(
            dw_channels1,
            dw_channels2,
            kernel_size=3,
            stride=2,
            padding=1,
-           norm_cfg=self.norm_cfg)
+           norm_cfg=self.norm_cfg,
+           dw_act_cfg=None)

We can have an ablation study for dw_act_cfg.
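A pure-Python sketch of the composition logic behind `dw_act_cfg=None` (layer names are placeholder strings, not MMCV's actual implementation): the depthwise conv keeps its norm but loses its activation, while the pointwise conv is unaffected.

```python
def dw_separable_layers(act_cfg, dw_act_cfg):
    """Sketch: sublayer order of a depthwise-separable conv block.

    dw_act_cfg=None drops the activation after the depthwise conv;
    passing a cfg dict keeps one. Layers are plain strings here just
    to show the composition, not real modules.
    """
    layers = ['dw_conv', 'dw_norm']
    if dw_act_cfg is not None:
        layers.append('dw_act(%s)' % dw_act_cfg['type'])
    layers += ['pw_conv', 'pw_norm', 'pw_act(%s)' % act_cfg['type']]
    return layers

with_act = dw_separable_layers(dict(type='ReLU'), dict(type='ReLU'))
without_act = dw_separable_layers(dict(type='ReLU'), None)
print(without_act)  # ['dw_conv', 'dw_norm', 'pw_conv', 'pw_norm', 'pw_act(ReLU)']
```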

        self.dsconv2 = DepthwiseSeparableConvModule(
            dw_channels2,
            out_channels,
            kernel_size=3,
            stride=2,
            padding=1,
-           norm_cfg=self.norm_cfg)
+           norm_cfg=self.norm_cfg,
+           dw_act_cfg=None)

We can have an ablation study for dw_act_cfg.

@@ -136,11 +141,13 @@ def __init__(self,
                 conv_cfg=self.conv_cfg,
                 norm_cfg=self.norm_cfg,
                 act_cfg=self.act_cfg,
-                align_corners=align_corners)
+                align_corners=True)

align_corners=True --> align_corners=align_corners
We can also have an ablation study for align_corners.
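For that ablation, the difference between the two settings is the coordinate mapping `F.interpolate` uses when resizing. A NumPy sketch of the 1-D source-coordinate formulas (these follow PyTorch's documented conventions; the helper function itself is just illustrative):

```python
import numpy as np

def src_coords(out_size, in_size, align_corners):
    """Input coordinate sampled for each output pixel (1-D resize)."""
    i = np.arange(out_size, dtype=float)
    if align_corners:
        # Corner pixels map exactly onto corner pixels: [0, out-1] -> [0, in-1].
        return i * (in_size - 1) / (out_size - 1)
    # Half-pixel-center convention; edge samples can fall outside the input.
    return (i + 0.5) * in_size / out_size - 0.5

print(src_coords(4, 2, True))   # endpoints land exactly on 0 and in_size-1
print(src_coords(4, 2, False))  # first/last samples fall outside [0, 1]
```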

@Junjun2016 (Collaborator) left a comment:

Have an ablation study for DepthwiseSeparableConvModule, one with activation after depthwise convolution and one without.

@Junjun2016 (Collaborator) left a comment:

What are the results of our original model with more iterations (160k and 320k)?

@xiexinch (Author) commented Jun 22, 2021

> What are the results of our original model with more iterations (160k and 320k)?

Training results of the master branch model:

| iters | mIoU  |
| ----- | ----- |
| 160k  | 69.69 |
| 320k  | 71.17 |

@xvjiarui (Collaborator) commented:

We should describe what was changed in the PR message and update the links to the models.

@xiexinch xiexinch changed the title [Fix] fix fast scnn training config [Fix] fix fast scnn Jul 2, 2021
@@ -1,10 +1,10 @@
_base_ = [

fast_scnn_8x4_160k_lr0.12_cityscapes.py --> fast_scnn_lr0.12_8x4_160k_cityscapes.py

@@ -37,7 +37,8 @@ def __init__(self,
                  conv_cfg=None,
                  norm_cfg=dict(type='BN'),
                  act_cfg=dict(type='ReLU6'),
-                 with_cp=False):
+                 with_cp=False,
+                 **kwards):

Suggested change:
-                 **kwards):
+                 **kwargs):
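Note that `**kwards` still runs (Python accepts any identifier for the var-keyword parameter); the suggestion is purely the conventional spelling. A minimal sketch of the forwarding pattern these hunks add, using stand-in functions rather than MMCV's real ConvModule:

```python
def conv_module(in_channels, out_channels, **kwargs):
    """Stand-in for ConvModule: records extra keyword args."""
    return dict(in_channels=in_channels, out_channels=out_channels, **kwargs)

def inverted_residual(in_channels, out_channels, **kwargs):
    """Stand-in for InvertedResidual: forwards extra kwargs to every
    conv it builds, which is what adding **kwargs to each call does."""
    return [
        conv_module(in_channels, out_channels, **kwargs),
        conv_module(out_channels, out_channels, act_cfg=None, **kwargs),
    ]

blocks = inverted_residual(32, 64, norm_cfg=dict(type='BN'))
print(blocks[0]['norm_cfg'])  # {'type': 'BN'}
print(blocks[1]['act_cfg'])   # None
```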

@@ -55,7 +56,8 @@ def __init__(self,
                     kernel_size=1,
                     conv_cfg=conv_cfg,
                     norm_cfg=norm_cfg,
-                    act_cfg=act_cfg))
+                    act_cfg=act_cfg,
+                    **kwards))

Suggested change:
-                    **kwards))
+                    **kwargs))

@@ -67,14 +69,16 @@ def __init__(self,
                     groups=hidden_dim,
                     conv_cfg=conv_cfg,
                     norm_cfg=norm_cfg,
-                    act_cfg=act_cfg),
+                    act_cfg=act_cfg,
+                    **kwards),

Suggested change:
-                    **kwards),
+                    **kwargs),

                 ConvModule(
                     in_channels=hidden_dim,
                     out_channels=out_channels,
                     kernel_size=1,
                     conv_cfg=conv_cfg,
                     norm_cfg=norm_cfg,
-                    act_cfg=None)
+                    act_cfg=None,
+                    **kwards)

Suggested change:
-                    **kwards)
+                    **kwargs)

@@ -22,7 +22,7 @@ class PPM(nn.ModuleList):
     """

     def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg,
-                 act_cfg, align_corners):
+                 act_cfg, align_corners, **kwards):

Suggested change:
-                 act_cfg, align_corners, **kwards):
+                 act_cfg, align_corners, **kwargs):

@@ -41,7 +41,8 @@ def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg,
                         1,
                         conv_cfg=self.conv_cfg,
                         norm_cfg=self.norm_cfg,
-                        act_cfg=self.act_cfg)))
+                        act_cfg=self.act_cfg,
+                        **kwards)))

Suggested change:
-                        **kwards)))
+                        **kwargs)))

        for i in range(1, self.num_convs):
            self.convs[i] = DepthwiseSeparableConvModule(
                self.channels,
                self.channels,
                kernel_size=self.kernel_size,
                padding=self.kernel_size // 2,
-               norm_cfg=self.norm_cfg)
+               norm_cfg=self.norm_cfg,
+               dw_act_cfg=None)

Suggested change:
-               dw_act_cfg=None)
+               dw_act_cfg=dw_act_cfg)


        if self.concat_input:
            self.conv_cat = DepthwiseSeparableConvModule(
                self.in_channels + self.channels,
                self.channels,
                kernel_size=self.kernel_size,
                padding=self.kernel_size // 2,
-               norm_cfg=self.norm_cfg)
+               norm_cfg=self.norm_cfg,
+               dw_act_cfg=None)

Suggested change:
-               dw_act_cfg=None)
+               dw_act_cfg=dw_act_cfg)

@@ -290,6 +309,8 @@ class FastSCNN(BaseModule):
             dict(type='ReLU')
         align_corners (bool): align_corners argument of F.interpolate.
             Default: False
+        dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is

Same

@xiexinch (Author) replied:
'default' is a string sentinel value in DepthwiseSeparableConvModule. In the LearningToDownsample module, I want to set it to None.
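The sentinel pattern being described: in `DepthwiseSeparableConvModule`, the string `'default'` means inherit `act_cfg`, so `None` stays available to mean no activation at all. A minimal sketch (the resolver function is illustrative, not MMCV's exact code):

```python
def resolve_dw_act(act_cfg, dw_act_cfg='default'):
    """'default' is a sentinel: inherit act_cfg. None explicitly
    disables the depthwise activation; any other dict overrides it."""
    if dw_act_cfg == 'default':
        return act_cfg
    return dw_act_cfg

print(resolve_dw_act(dict(type='ReLU')))        # {'type': 'ReLU'} (inherited)
print(resolve_dw_act(dict(type='ReLU'), None))  # None (no activation)
```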

@open-mmlab open-mmlab deleted a comment from Junjun2016 Jul 2, 2021
@Junjun2016 Junjun2016 merged commit 7e1d853 into open-mmlab:master Jul 2, 2021
bowenroom pushed a commit to bowenroom/mmsegmentation that referenced this pull request Feb 25, 2022
* [Refactor] Match paddle seg weight

* Match inference

* fix exp setting

* delete comment and rename config files

* replace hard code with config parameters

* fix ppm concat order

* remove hardcode

* update result

* fix typo

* complement docstring

* complement FutureFusionModule docstring

* modify log link
sibozhang pushed a commit to sibozhang/mmsegmentation that referenced this pull request Mar 22, 2024
* polish README description

* polish

* Update README_cn.md