[Fix] Fix some vit init bugs #609
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master     #609      +/-   ##
==========================================
+ Coverage   85.45%   85.57%   +0.11%
==========================================
  Files         101      101
  Lines        5220     5226       +6
  Branches      840      842       +2
==========================================
+ Hits         4461     4472      +11
+ Misses        586      583       -3
+ Partials     173      171       -2
mmseg/models/backbones/vit.py (Outdated)

@@ -325,7 +338,8 @@ def init_weights(self, pretrained=None):

             self.load_state_dict(state_dict, False)

-        elif pretrained is None:
+        elif isinstance(self.pretrained, type(None)):
Reviewer: use the plain identity check `is None` here rather than `isinstance(self.pretrained, type(None))`.
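For context, a minimal runnable sketch of why the review asks for the identity check (a standalone example, not the PR's code):

```python
pretrained = None

# Idiomatic: None is a singleton, so an identity comparison is the
# clearest and cheapest way to test for it.
if pretrained is None:
    print('no checkpoint given, will use random init')

# The form the diff introduced is equivalent but non-idiomatic.
if isinstance(pretrained, type(None)):
    print('same result, but harder to read')
```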
mmseg/models/backbones/vit.py (Outdated)

@@ -185,8 +179,12 @@ class VisionTransformer(BaseModule):
             Default: dict(type='LN')
         act_cfg (dict): The activation config for FFNs.
             Defalut: dict(type='GELU').
-        final_norm (bool): Whether to add a additional layer to normalize
+        first_norm (bool): Whether to add a norm in PatchEmbed Block.
Suggested change:
-        first_norm (bool): Whether to add a norm in PatchEmbed Block.
+        patch_norm (bool): Whether to add a norm in PatchEmbed Block.
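To make the naming concrete, a hedged sketch of how the renamed flag could sit next to the existing `final_norm` in the docstring (defaults and wording are assumptions, not the merged code):

```python
class VisionTransformer:
    """Vision Transformer backbone (docstring sketch only).

    Args:
        patch_norm (bool): Whether to add a norm layer in the PatchEmbed
            block. Default: False.
        final_norm (bool): Whether to add an additional layer to
            normalize the final feature map. Default: False.
    """

    def __init__(self, patch_norm=False, final_norm=False):
        self.patch_norm = patch_norm
        self.final_norm = final_norm
```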
mmseg/models/backbones/vit.py (Outdated)

        self.patch_embed = PatchEmbed(
            img_size=img_size,
            patch_size=patch_size,
            in_channels=in_channels,
            embed_dim=embed_dims,
            norm_cfg=None)
Suggested change:
         self.patch_embed = PatchEmbed(
             img_size=img_size,
             patch_size=patch_size,
             in_channels=in_channels,
             embed_dim=embed_dims,
-            norm_cfg=None)
+            norm_cfg=norm_cfg if patch_norm else None)
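The suggestion gates the patch-embedding norm on the new flag. A self-contained sketch of that pattern with a simplified `PatchEmbed` (the real mmcv/mmseg signature differs):

```python
import torch
import torch.nn as nn


class PatchEmbed(nn.Module):
    """Simplified patch embedding; only the norm handling matters here."""

    def __init__(self, embed_dim=768, norm_cfg=None):
        super().__init__()
        # Build a norm layer only when a config is supplied; otherwise
        # fall back to a no-op so forward() stays uniform.
        self.norm = nn.LayerNorm(embed_dim) if norm_cfg else nn.Identity()

    def forward(self, x):
        return self.norm(x)


patch_norm, norm_cfg = True, dict(type='LN')
# The reviewer's one-line fix: pass the backbone's norm_cfg through
# only when patch_norm is enabled.
patch_embed = PatchEmbed(norm_cfg=norm_cfg if patch_norm else None)
out = patch_embed(torch.randn(1, 196, 768))
```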
Commits:
* [Fix] Fix vit init bug
* Add some vit unit tests
* Modify module import
* Fix pretrain weights bug
* Modify pretrained judge
* Add some unit tests to improve code cov
* Optimize code
* Fix vit unit test
Fix some init bugs in ViT's new-style parameters.
Add some unit tests for ViT to improve code coverage.
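As an illustration, a minimal sketch of the kind of init test this PR adds; the exact assertions and constructor behavior are assumptions, not the merged test:

```python
import pytest
import torch

from mmseg.models.backbones import VisionTransformer


def test_vit_init():
    # A pretrained value that is neither a str nor None should be
    # rejected (assumed behavior after the fix).
    with pytest.raises(TypeError):
        VisionTransformer(pretrained=123)

    # pretrained=None should fall back to random initialization.
    model = VisionTransformer(pretrained=None)
    model.init_weights()

    # Smoke-test a forward pass at the default 224x224 input size.
    feats = model(torch.randn(1, 3, 224, 224))
    assert isinstance(feats, (list, tuple))
```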