[Feature] Provide URLs of STDC, Segmenter and Twins pretrained models #1357

Merged · 1 commit · Mar 9, 2022
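Every file changed below follows one pattern: a hard-coded local path under `pretrain/` or `pretrained/` becomes a module-level `checkpoint` variable holding a hosted URL, consumed via `pretrained=` or `init_cfg=dict(type='Pretrained', ...)`. A condensed sketch of that pattern (the URL is taken from the diffs; the config fragment itself is illustrative and assumes MMCV's `Pretrained` init resolves HTTP(S) checkpoints automatically):

```python
# Condensed sketch of the pattern this PR applies to every config below:
# reference a hosted checkpoint URL instead of a local pretrained/ file.
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_small_20220308-e638c41c.pth'  # noqa

model = dict(
    type='EncoderDecoder',
    backbone=dict(
        type='PCPVT',
        # before: init_cfg=dict(type='Pretrained',
        #                       checkpoint='pretrained/pcpvt_small.pth')
        init_cfg=dict(type='Pretrained', checkpoint=checkpoint)))
```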
3 changes: 2 additions & 1 deletion configs/_base_/models/segmenter_vit-b16_mask.py
@@ -1,8 +1,9 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_base_p16_384_20220308-96dfe169.pth' # noqa
 # model settings
 backbone_norm_cfg = dict(type='LN', eps=1e-6, requires_grad=True)
 model = dict(
     type='EncoderDecoder',
-    pretrained='pretrain/vit_base_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(
         type='VisionTransformer',
         img_size=(512, 512),
5 changes: 3 additions & 2 deletions configs/_base_/models/twins_pcpvt-s_fpn.py
@@ -1,12 +1,13 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_small_20220308-e638c41c.pth' # noqa
+
 # model settings
 backbone_norm_cfg = dict(type='LN')
 norm_cfg = dict(type='SyncBN', requires_grad=True)
 model = dict(
     type='EncoderDecoder',
     backbone=dict(
         type='PCPVT',
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_small.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         in_channels=3,
         embed_dims=[64, 128, 320, 512],
         num_heads=[1, 2, 5, 8],
5 changes: 3 additions & 2 deletions configs/_base_/models/twins_pcpvt-s_upernet.py
@@ -1,12 +1,13 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_small_20220308-e638c41c.pth' # noqa
+
 # model settings
 backbone_norm_cfg = dict(type='LN')
 norm_cfg = dict(type='SyncBN', requires_grad=True)
 model = dict(
     type='EncoderDecoder',
     backbone=dict(
         type='PCPVT',
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_small.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         in_channels=3,
         embed_dims=[64, 128, 320, 512],
         num_heads=[1, 2, 5, 8],
4 changes: 2 additions & 2 deletions configs/segmenter/README.md
@@ -33,9 +33,9 @@ Image segmentation is often ambiguous at the level of individual image patches a
 
 ## Usage
 
-To use the pre-trained ViT model from [Segmenter](https://github.com/rstrudel/segmenter), it is necessary to convert keys.
+We have provided pretrained models converted from [ViT-AugReg](https://github.com/rwightman/pytorch-image-models/blob/f55c22bebf9d8afc449d317a723231ef72e0d662/timm/models/vision_transformer.py#L54-L106).
 
-We provide a script [`vitjax2mmseg.py`](../../tools/model_converters/vitjax2mmseg.py) in the tools directory to convert the key of models from [ViT-AugReg](https://github.com/rwightman/pytorch-image-models/blob/f55c22bebf9d8afc449d317a723231ef72e0d662/timm/models/vision_transformer.py#L54-L106) to MMSegmentation style.
+If you want to convert keys on your own to use the pre-trained ViT model from [Segmenter](https://github.com/rstrudel/segmenter), we also provide a script [`vitjax2mmseg.py`](../../tools/model_converters/vitjax2mmseg.py) in the tools directory to convert the key of models from [ViT-AugReg](https://github.com/rwightman/pytorch-image-models/blob/f55c22bebf9d8afc449d317a723231ef72e0d662/timm/models/vision_transformer.py#L54-L106) to MMSegmentation style.
 
 ```shell
 python tools/model_converters/vitjax2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH}
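As a quick sanity check (not part of the diff), the hosted checkpoints can be fetched and inspected with plain PyTorch; the snippet below uses the ViT-B URL from the first diff and assumes only that the file is a regular `torch.save` dict, possibly nested under `state_dict`:

```python
# Hypothetical sanity check: download the hosted Segmenter ViT-B checkpoint
# and confirm its keys are already MMSegmentation-style, so no manual
# vitjax2mmseg.py conversion is needed for these URLs.
import torch

url = ('https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/'
       'segmenter/vit_base_p16_384_20220308-96dfe169.pth')
state = torch.hub.load_state_dict_from_url(url, map_location='cpu')

# Converted checkpoints are commonly wrapped in a 'state_dict' entry.
weights = state.get('state_dict', state)
print(len(weights), 'tensors; sample keys:', sorted(weights)[:3])
```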
@@ -3,9 +3,10 @@
     '../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
     '../_base_/schedules/schedule_160k.py'
 ]
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_large_p16_384_20220308-d4efb41d.pth' # noqa
 
 model = dict(
-    pretrained='pretrain/vit_large_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(
         type='VisionTransformer',
         img_size=(640, 640),
@@ -4,9 +4,11 @@
     '../_base_/schedules/schedule_160k.py'
 ]
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_small_p16_384_20220308-410f6037.pth' # noqa
+
 backbone_norm_cfg = dict(type='LN', eps=1e-6, requires_grad=True)
 model = dict(
-    pretrained='pretrain/vit_small_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(
         img_size=(512, 512),
         embed_dims=384,
@@ -4,8 +4,10 @@
     '../_base_/schedules/schedule_160k.py'
 ]
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_tiny_p16_384_20220308-cce8c795.pth' # noqa
+
 model = dict(
-    pretrained='pretrain/vit_tiny_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(embed_dims=192, num_heads=3),
     decode_head=dict(
         type='SegmenterMaskTransformerHead',
4 changes: 2 additions & 2 deletions configs/stdc/README.md
@@ -35,9 +35,9 @@ BiSeNet has been proved to be a popular two-stream network for real-time segment
 
 ## Usage
 
-To use original repositories' [ImageNet Pretrained STDCNet Weights](https://drive.google.com/drive/folders/1wROFwRt8qWHD4jSo8Zu1gp1d6oYJ3ns1) , it is necessary to convert keys.
+We have provided [ImageNet Pretrained STDCNet Weights](https://drive.google.com/drive/folders/1wROFwRt8qWHD4jSo8Zu1gp1d6oYJ3ns1) models converted from [official repo](https://github.com/MichaelFan01/STDC-Seg).
 
-We provide a script [`stdc2mmseg.py`](../../tools/model_converters/stdc2mmseg.py) in the tools directory to convert the key of models from [the official repo](https://github.com/MichaelFan01/STDC-Seg) to MMSegmentation style.
+If you want to convert keys on your own to use official repositories' pre-trained models, we also provide a script [`stdc2mmseg.py`](../../tools/model_converters/stdc2mmseg.py) in the tools directory to convert the key of models from [the official repo](https://github.com/MichaelFan01/STDC-Seg) to MMSegmentation style.
 
 ```shell
 python tools/model_converters/stdc2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH} ${STDC_TYPE}
4 changes: 2 additions & 2 deletions configs/stdc/stdc1_in1k-pre_512x1024_80k_cityscapes.py
@@ -1,6 +1,6 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/stdc/stdc1_20220308-5368626c.pth' # noqa
 _base_ = './stdc1_512x1024_80k_cityscapes.py'
 model = dict(
     backbone=dict(
         backbone_cfg=dict(
-            init_cfg=dict(
-                type='Pretrained', checkpoint='./pretrained/stdc1.pth'))))
+            init_cfg=dict(type='Pretrained', checkpoint=checkpoint))))
4 changes: 2 additions & 2 deletions configs/stdc/stdc2_in1k-pre_512x1024_80k_cityscapes.py
@@ -1,6 +1,6 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/stdc/stdc2_20220308-7dbd9127.pth' # noqa
 _base_ = './stdc2_512x1024_80k_cityscapes.py'
 model = dict(
     backbone=dict(
         backbone_cfg=dict(
-            init_cfg=dict(
-                type='Pretrained', checkpoint='./pretrained/stdc2.pth'))))
+            init_cfg=dict(type='Pretrained', checkpoint=checkpoint))))
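Note that the two STDC diffs above are the only ones where `init_cfg` sits under a nested `backbone_cfg` key: in these configs the outer `backbone` wraps the actual STDC network, so the ImageNet weights attach to the inner module. A sketch of the merged structure (the nesting is taken from the diffs; the wrapper description reflects the base config and is stated here as an assumption):

```python
# Why the STDC diffs target backbone_cfg rather than backbone: the outer
# backbone wraps the real STDC network (assumption from the base config),
# and the ImageNet-pretrained weights belong to that inner network.
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/stdc/stdc1_20220308-5368626c.pth'  # noqa

model = dict(
    backbone=dict(          # outer context-path wrapper
        backbone_cfg=dict(  # inner STDC network the checkpoint matches
            init_cfg=dict(type='Pretrained', checkpoint=checkpoint))))
```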
4 changes: 2 additions & 2 deletions configs/twins/README.md
@@ -34,9 +34,9 @@ Very recently, a variety of vision transformer architectures for dense predictio
 
 ## Usage
 
-To use other repositories' pre-trained models, it is necessary to convert keys.
+We have provided pretrained models converted from [official repo](https://github.com/Meituan-AutoML/Twins).
 
-We provide a script [`twins2mmseg.py`](../../tools/model_converters/twins2mmseg.py) in the tools directory to convert the key of models from [the official repo](https://github.com/Meituan-AutoML/Twins) to MMSegmentation style.
+If you want to convert keys on your own to use official repositories' pre-trained models, we also provide a script [`twins2mmseg.py`](../../tools/model_converters/twins2mmseg.py) in the tools directory to convert the key of models from [the official repo](https://github.com/Meituan-AutoML/Twins) to MMSegmentation style.
 
 ```shell
 python tools/model_converters/twins2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH} ${MODEL_TYPE}
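The Twins configs below all inherit a small-variant base via `_base_` and override only `init_cfg` plus the architecture fields, so the merged result can be checked with MMCV's config loader. A minimal sketch, assuming mmcv (pre-2.0) is installed and the repository root is the working directory:

```python
# Minimal sketch: load one of the derived Twins configs and confirm the
# checkpoint URL from this PR survives the _base_ merge.
from mmcv import Config  # assumes mmcv < 2.0

cfg = Config.fromfile(
    'configs/twins/twins_svt-b_uperhead_8x2_512x512_160k_ade20k.py')
print(cfg.model.backbone.init_cfg.checkpoint)
# expected: .../pretrain/twins/alt_gvt_base_20220308-1b7eb711.pth
```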
@@ -1,7 +1,8 @@
 _base_ = ['./twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_base_20220308-0621964c.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_base.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         depths=[3, 4, 18, 3]), )
@@ -1,9 +1,10 @@
 _base_ = ['./twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k.py']
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_base_20220308-0621964c.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_base.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         depths=[3, 4, 18, 3],
         drop_path_rate=0.3))
 
@@ -1,7 +1,8 @@
 _base_ = ['./twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_large_20220308-37579dc6.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_large.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         depths=[3, 8, 27, 3]))
@@ -1,8 +1,10 @@
 _base_ = ['./twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k.py']
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_large_20220308-37579dc6.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_large.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         depths=[3, 8, 27, 3],
         drop_path_rate=0.3))
+
@@ -1,9 +1,10 @@
 _base_ = ['./twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_base_20220308-1b7eb711.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/alt_gvt_base.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         embed_dims=[96, 192, 384, 768],
         num_heads=[3, 6, 12, 24],
         depths=[2, 2, 18, 2]),
6 changes: 4 additions & 2 deletions configs/twins/twins_svt-b_uperhead_8x2_512x512_160k_ade20k.py
@@ -1,8 +1,10 @@
 _base_ = ['./twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py']
+
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_base_20220308-1b7eb711.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/alt_gvt_base.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         embed_dims=[96, 192, 384, 768],
         num_heads=[3, 6, 12, 24],
         depths=[2, 2, 18, 2]),
@@ -1,9 +1,10 @@
 _base_ = ['./twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_large_20220308-fb5936f3.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/alt_gvt_large.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         embed_dims=[128, 256, 512, 1024],
         num_heads=[4, 8, 16, 32],
         depths=[2, 2, 18, 2],
6 changes: 4 additions & 2 deletions configs/twins/twins_svt-l_uperhead_8x2_512x512_160k_ade20k.py
@@ -1,8 +1,10 @@
 _base_ = ['./twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py']
+
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_large_20220308-fb5936f3.pth' # noqa
+
 model = dict(
     backbone=dict(
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/alt_gvt_large.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         embed_dims=[128, 256, 512, 1024],
         num_heads=[4, 8, 16, 32],
         depths=[2, 2, 18, 2],
@@ -2,11 +2,13 @@
     '../_base_/models/twins_pcpvt-s_fpn.py', '../_base_/datasets/ade20k.py',
     '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
 ]
+
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_small_20220308-7e1c3695.pth' # noqa
+
 model = dict(
     backbone=dict(
         type='SVT',
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/alt_gvt_small.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         embed_dims=[64, 128, 256, 512],
         num_heads=[2, 4, 8, 16],
         mlp_ratios=[4, 4, 4, 4],
6 changes: 4 additions & 2 deletions configs/twins/twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py
@@ -3,11 +3,13 @@
     '../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
     '../_base_/schedules/schedule_160k.py'
 ]
+
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_small_20220308-7e1c3695.pth' # noqa
+
 model = dict(
     backbone=dict(
         type='SVT',
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/alt_gvt_small.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         embed_dims=[64, 128, 256, 512],
         num_heads=[2, 4, 8, 16],
         mlp_ratios=[4, 4, 4, 4],