Commit 85450da

refactor: uniform all model names (#701)
1 parent d627dc2 commit 85450da

92 files changed (+548, -541 lines changed)


README.md (+1 -1)

@@ -216,7 +216,7 @@ Currently, MindCV supports the model families listed below. More models with pre
 * EfficientNet (MBConvNet Family) https://arxiv.org/abs/1905.11946
 * EfficientNet V2 - https://arxiv.org/abs/2104.00298
 * GhostNet - https://arxiv.org/abs/1911.11907
-* GoogleNet - https://arxiv.org/abs/1409.4842
+* GoogLeNet - https://arxiv.org/abs/1409.4842
 * Inception-V3 - https://arxiv.org/abs/1512.00567
 * Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
 * MNASNet - https://arxiv.org/abs/1807.11626

README_CN.md (+1 -1)

@@ -217,7 +217,7 @@ python train.py --model=resnet50 --dataset=cifar10 \
 * EfficientNet (MBConvNet Family) https://arxiv.org/abs/1905.11946
 * EfficientNet V2 - https://arxiv.org/abs/2104.00298
 * GhostNet - https://arxiv.org/abs/1911.11907
-* GoogleNet - https://arxiv.org/abs/1409.4842
+* GoogLeNet - https://arxiv.org/abs/1409.4842
 * Inception-V3 - https://arxiv.org/abs/1512.00567
 * Inception-ResNet-V2 and Inception-V4 - https://arxiv.org/abs/1602.07261
 * MNASNet - https://arxiv.org/abs/1807.11626

RELEASE.md (+1 -1)

@@ -123,7 +123,7 @@
 `mindcv.models` now expose `num_classes` and `in_channels` as constructor arguments:

 - Add DenseNet models and pre-trained weights
-- Add GoogleNet models and pre-trained weights
+- Add GoogLeNet models and pre-trained weights
 - Add Inception V3 models and pre-trained weights
 - Add Inception V4 models and pre-trained weights
 - Add MnasNet models and pre-trained weights

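The spellings unified above are not just documentation: they are the registry keys that callers pass to MindCV's model factory, so the docs, the configs, and the registry have to agree on one form. A minimal sketch of that usage, assuming MindCV's `create_model` factory and borrowing names from the tables touched in this commit:

```python
# Minimal sketch: model names are registry keys (assumes MindCV's create_model
# factory; the names below are taken from tables updated in this commit).
import mindcv

for name in ("googlenet", "inception_v3", "convnext_tiny"):
    # The string must match the registered name exactly, including case and underscores.
    net = mindcv.create_model(name, num_classes=1000, pretrained=False)
    print(name, "->", type(net).__name__)
```

Stale spellings such as `GoogleNet` or `BiTresnet50` would presumably no longer resolve after this rename, which is why the configs below are updated in the same commit.
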
benchmark_results.md (+100 -96)

Large diff not rendered by default.

configs/bit/bit_resnet101_ascend.yaml (+1 -1)

@@ -18,7 +18,7 @@ hflip: 0.5
 crop_pct: 0.875

 # model
-model: 'BiTresnet101'
+model: 'BiT_resnet101'
 num_classes: 1000
 pretrained: False
 ckpt_path: ''

configs/bit/bit_resnet50_ascend.yaml (+1 -1)

@@ -18,7 +18,7 @@ hflip: 0.5
 crop_pct: 0.875

 # model
-model: 'BiTresnet50'
+model: 'BiT_resnet50'
 num_classes: 1000
 pretrained: False
 ckpt_path: ''

configs/bit/bit_resnet50x3_ascend.yaml (+1 -1)

@@ -20,7 +20,7 @@ crop_pct: 0.875
 auto_augment: "randaug-m7-mstd0.5"

 # model
-model: 'BiTresnet50x3'
+model: 'BiT_resnet50x3'
 num_classes: 1000
 pretrained: False
 ckpt_path: ''

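The `model:` field in each recipe above is looked up in the model registry by the training entry point, so it has to match the renamed identifiers exactly. A hypothetical pre-flight check, assuming PyYAML plus MindCV's `list_models`/`create_model` helpers and using one of the files touched here:

```python
# Hypothetical pre-flight check for a recipe; not part of this commit.
# Assumes PyYAML and MindCV's list_models/create_model registry helpers.
import yaml
from mindcv.models import create_model, list_models

cfg_path = "configs/bit/bit_resnet50_ascend.yaml"  # file updated in this commit
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

name = cfg["model"]  # 'BiT_resnet50' after this rename
if name not in list_models():
    raise ValueError(f"'{name}' is not a registered MindCV model name")

net = create_model(name, num_classes=cfg["num_classes"], pretrained=cfg["pretrained"])
print("built", name, "with", cfg["num_classes"], "classes")
```
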
configs/convnext/README.md (+3 -3)

@@ -25,9 +25,9 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

 | Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
 |----------------|-----------|-----------|-----------|------------|-------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
-| ConvNeXt_tiny | D910x64-G | 81.91 | 95.79 | 28.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) |
-| ConvNeXt_small | D910x64-G | 83.40 | 96.36 | 50.22 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_small-e23008f3.ckpt) |
-| ConvNeXt_base | D910x64-G | 83.32 | 96.24 | 88.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_base_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_base-ee3544b8.ckpt) |
+| convnext_tiny | D910x64-G | 81.91 | 95.79 | 28.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) |
+| convnext_small | D910x64-G | 83.40 | 96.36 | 50.22 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_small-e23008f3.ckpt) |
+| convnext_base | D910x64-G | 83.32 | 96.24 | 88.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_base_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_base-ee3544b8.ckpt) |

 </div>

configs/convnextv2/README.md (+3 -3)

@@ -22,9 +22,9 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

 <div align="center">

-| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
-|-----------------|----------|-----------|-----------|------------|----------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
-| ConvNeXtV2_tiny | D910x8-G | 82.43 | 95.98 | 28.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) |
+| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
+|------------------|----------|-----------|-----------|------------|----------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
+| convnextv2_tiny | D910x8-G | 82.43 | 95.98 | 28.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) |

 </div>

configs/crossvit/README.md (+2 -2)

@@ -1,4 +1,4 @@
-# Crossvit
+# CrossViT
 > [CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification](https://arxiv.org/abs/2103.14899)

 ## Introduction
@@ -77,7 +77,7 @@ python train.py --config configs/crossvit/crossvit_15_ascend.yaml --data_dir /pa
 To validate the accuracy of the trained model, you can use `validate.py` and parse the checkpoint path with `--ckpt_path`.

 ```
-python validate.py -c configs/crossvit/crossvit15_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
+python validate.py -c configs/crossvit/crossvit_15_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
 ```

 ### Deployment

configs/crossvit/crossvit_15_ascend.yaml (+1 -1)

@@ -28,7 +28,7 @@ crop_pct: 0.935
 ema: True

 # model
-model: 'crossvit15'
+model: 'crossvit_15'
 num_classes: 1000
 pretrained: False
 ckpt_path: ''

configs/crossvit/crossvit_18_ascend.yaml (+1 -1)

@@ -28,7 +28,7 @@ crop_pct: 0.935
 ema: True

 # model
-model: 'crossvit18'
+model: 'crossvit_18'
 num_classes: 1000
 pretrained: False
 ckpt_path: ''

configs/crossvit/crossvit_9_ascend.yaml (+1 -1)

@@ -27,7 +27,7 @@ color_jitter: 0.4
 crop_pct: 0.935

 # model
-model: 'crossvit9'
+model: 'crossvit_9'
 num_classes: 1000
 pretrained: False
 ckpt_path: ''

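With the underscore spellings in place, the three CrossViT recipes above point at `crossvit_9`, `crossvit_15`, and `crossvit_18`. When in doubt about which spelling the registry exposes, querying it directly (again assuming MindCV's `list_models` helper) is safer than guessing:

```python
# Hypothetical registry query; assumes mindcv.models.list_models is available.
from mindcv.models import list_models

# Expected to include 'crossvit_9', 'crossvit_15' and 'crossvit_18' after this commit.
crossvit_names = sorted(m for m in list_models() if m.startswith("crossvit"))
print(crossvit_names)
```
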
configs/densenet/README.md (+6 -6)

@@ -37,12 +37,12 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

 <div align="center">

-| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
-|--------------|----------|-----------|-----------|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
-| densenet_121 | D910x8-G | 75.64 | 92.84 | 8.06 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
-| densenet_161 | D910x8-G | 79.09 | 94.66 | 28.90 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_161_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet161-120_5004_Ascend.ckpt) |
-| densenet_169 | D910x8-G | 77.26 | 93.71 | 14.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_169_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet169-120_5004_Ascend.ckpt) |
-| densenet_201 | D910x8-G | 78.14 | 94.08 | 20.24 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_201_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet201-120_5004_Ascend.ckpt) |
+| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
+|-------------|----------|-----------|-----------|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
+| densenet121 | D910x8-G | 75.64 | 92.84 | 8.06 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
+| densenet161 | D910x8-G | 79.09 | 94.66 | 28.90 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_161_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet161-120_5004_Ascend.ckpt) |
+| densenet169 | D910x8-G | 77.26 | 93.71 | 14.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_169_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet169-120_5004_Ascend.ckpt) |
+| densenet201 | D910x8-G | 78.14 | 94.08 | 20.24 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_201_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet201-120_5004_Ascend.ckpt) |

 </div>

configs/dpn/README.md (+6 -6)

@@ -32,12 +32,12 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

 <div align="center">

-| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
-|-------|----------|-----------|-----------|------------|------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
-| dpn92 | D910x8-G | 79.46 | 94.49 | 37.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt) |
-| dpn98 | D910x8-G | 79.94 | 94.57 | 61.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn98_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn98-119a8207.ckpt) |
-| dpn107 | D910x8-G | 80.05 | 94.74 | 87.13 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn107_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn107-7d7df07b.ckpt) |
-| dpn131 | D910x8-G | 80.07 | 94.72 | 79.48 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn131_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn131-47f084b3.ckpt) |
+| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
+|---------|----------|-----------|-----------|------------|------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
+| dpn92 | D910x8-G | 79.46 | 94.49 | 37.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt) |
+| dpn98 | D910x8-G | 79.94 | 94.57 | 61.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn98_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn98-119a8207.ckpt) |
+| dpn107 | D910x8-G | 80.05 | 94.74 | 87.13 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn107_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn107-7d7df07b.ckpt) |
+| dpn131 | D910x8-G | 80.07 | 94.72 | 79.48 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn131_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn131-47f084b3.ckpt) |

 </div>

configs/ghostnet/README.md (+3 -3)

@@ -29,9 +29,9 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

 | Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
 |--------------|----------|-----------|-----------|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------|
-| GhostNet_050 | D910x8-G | 66.03 | 86.64 | 2.60 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) |
-| GhostNet_100 | D910x8-G | 73.78 | 91.66 | 5.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_100_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_100-bef8025a.ckpt) |
-| GhostNet_130 | D910x8-G | 75.50 | 92.56 | 7.39 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_130_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_130-cf4c235c.ckpt) |
+| ghostnet_050 | D910x8-G | 66.03 | 86.64 | 2.60 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) |
+| ghostnet_100 | D910x8-G | 73.78 | 91.66 | 5.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_100_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_100-bef8025a.ckpt) |
+| ghostnet_130 | D910x8-G | 75.50 | 92.56 | 7.39 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_130_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_130-cf4c235c.ckpt) |

 </div>

configs/googlenet/README.md (+2 -2)

@@ -14,7 +14,7 @@ training results.[[1](#references)]
 <img src="https://user-images.githubusercontent.com/53842165/210749903-5ff23c0e-547f-487d-bb64-70b6e99031ea.jpg" width=180 />
 </p>
 <p align="center">
-<em>Figure 1. Architecture of GoogLENet [<a href="#references">1</a>] </em>
+<em>Figure 1. Architecture of GoogLeNet [<a href="#references">1</a>] </em>
 </p>

 ## Results
@@ -25,7 +25,7 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

 | Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
 |-----------|----------|-----------|-----------|------------|---------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|
-| GoogLeNet | D910x8-G | 72.68 | 90.89 | 6.99 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) |
+| googlenet | D910x8-G | 72.68 | 90.89 | 6.99 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) |

 </div>

configs/inceptionv3/README.md (+2 -2)

@@ -3,7 +3,7 @@

 ## Introduction

-InceptionV3 is an upgraded version of GoogleNet. One of the most important improvements of V3 is Factorization, which
+InceptionV3 is an upgraded version of GoogLeNet. One of the most important improvements of V3 is Factorization, which
 decomposes 7x7 into two one-dimensional convolutions (1x7, 7x1), and 3x3 is the same (1x3, 3x1), such benefits, both It
 can accelerate the calculation (excess computing power can be used to deepen the network), and can split 1 conv into 2
 convs, which further increases the network depth and increases the nonlinearity of the network. It is also worth noting
@@ -26,7 +26,7 @@ Our reproduced model performance on ImageNet-1K is reported as follows.

 | Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
 |--------------|----------|-----------|-----------|------------|---------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|
-| Inception_v3 | D910x8-G | 79.11 | 94.40 | 27.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) |
+| inception_v3 | D910x8-G | 79.11 | 94.40 | 27.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) |

 </div>
configs/inceptionv4/README.md

+1-1
Original file line numberDiff line numberDiff line change
@@ -23,7 +23,7 @@ Our reproduced model performance on ImageNet-1K is reported as follows.
2323

2424
| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
2525
|--------------|----------|-----------|-----------|------------|---------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|
26-
| Inception_v4 | D910x8-G | 80.88 | 95.34 | 42.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) |
26+
| inception_v4 | D910x8-G | 80.88 | 95.34 | 42.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) |
2727

2828
</div>
2929

0 commit comments
