
Commit 25f5e62

ChongWei905 committed: docs: fix readme bugs

1 parent 7bc4a44 commit 25f5e62

File tree

54 files changed: +1359 -1647 lines changed


benchmark_results.md

+2 -2

@@ -114,5 +114,5 @@

 </details>

-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.

configs/README.md

+11 -11

@@ -31,24 +31,24 @@ Please follow the outline structure and **table format** shown in [densenet/READ

 #### Table Format

-<div align="center">
+

 | model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
 | ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
 | densenet121 | 8.06 | 8 | 32 | 224x224 | O2 | 300s | 47,34 | 5446.81 | 75.67 | 92.77 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |

-</div>
+

 Illustration:
-- Model: model name in lower case with _ seperator.
-- Top-1 and Top-5: Accuracy reported on the validatoin set of ImageNet-1K. Keep 2 digits after the decimal point.
-- Params (M): # of model parameters in millions (10^6). Keep **2 digits** after the decimal point
-- Batch Size: Training batch size
-- Cards: # of cards
-- Ms/step: Time used on training per step in ms
-- Jit_level: Jit level of mindspore context, which contains 3 levels: O0/O1/O2
-- Recipe: Training recipe/configuration linked to a yaml config file.
-- Download: url of the pretrained model weights
+- model name: model name in lower case with _ seperator.
+- top-1 and top-5: Accuracy reported on the validatoin set of ImageNet-1K. Keep 2 digits after the decimal point.
+- params(M): # of model parameters in millions (10^6). Keep **2 digits** after the decimal point
+- batch size: Training batch size
+- cards: # of cards
+- ms/step: Time used on training per step in ms
+- jit level: Jit level of mindspore context, which contains 3 levels: O0/O1/O2
+- recipe: Training recipe/configuration linked to a yaml config file.
+- weight: url of the pretrained model weights

 ### Model Checkpoint Format
 The checkpoint (i.e., model weight) name should follow this format: **{model_name}_{specification}-{sha256sum}.ckpt**, e.g., `poolformer_s12-5be5c4e4.ckpt`.

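The checkpoint naming rule shown above, **{model_name}_{specification}-{sha256sum}.ckpt**, can be illustrated with a minimal sketch (not part of this commit). The 8-character truncation of the SHA-256 digest is an assumption inferred from published names such as `poolformer_s12-5be5c4e4.ckpt`, and the helper name is hypothetical:

```python
import hashlib
from pathlib import Path


def checkpoint_filename(ckpt_path: str, model_name: str, specification: str = "") -> str:
    """Build a name of the form {model_name}_{specification}-{sha256sum}.ckpt."""
    # Assumption: the suffix is the first 8 hex characters of the file's SHA-256
    # digest, matching the length seen in names like poolformer_s12-5be5c4e4.ckpt.
    digest = hashlib.sha256(Path(ckpt_path).read_bytes()).hexdigest()[:8]
    stem = f"{model_name}_{specification}" if specification else model_name
    return f"{stem}-{digest}.ckpt"


# Hypothetical usage:
# checkpoint_filename("outputs/poolformer_s12.ckpt", "poolformer", "s12")
# -> "poolformer_s12-xxxxxxxx.ckpt"
```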
configs/bit/README.md

+24 -28

@@ -2,10 +2,6 @@

 > [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370)

-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |

 ## Introduction

@@ -17,30 +13,10 @@ is required. 3) Long pre-training time: Pretraining on a larger dataset requires
 BiT use GroupNorm combined with Weight Standardisation instead of BatchNorm. Since BatchNorm performs worse when the number of images on each accelerator is
 too low. 5) With BiT fine-tuning, good performance can be achieved even if there are only a few examples of each type on natural images.[[1, 2](#References)]

-
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-<div align="center">
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
-| bit_resnet50 | 25.55 | 8 | 32 | 224x224 | O2 | 146s | 74.52 | 3413.33 | 76.81 | 93.17 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
-
-
-</div>
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |

 ## Quick Start

@@ -87,6 +63,26 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
 python validate.py -c configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
 ```

+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
+| bit_resnet50 | 25.55 | 8 | 32 | 224x224 | O2 | 146s | 74.52 | 3413.33 | 76.81 | 93.17 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
+
+
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
 ## References

 <!--- Guideline: Citation format should follow GB/T 7714. -->

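A reading note on the throughput columns in these tables: the reported img/s appears roughly consistent with cards x batch size / (ms/step / 1000), though the published numbers are measured rather than derived, so they do not match the arithmetic exactly. A minimal sketch of that sanity check, using the bit_resnet50 row above and the densenet121 row earlier (reading its "47,34" entry as 47.34):

```python
def approx_imgs_per_sec(cards: int, batch_size: int, ms_per_step: float) -> float:
    """Rough throughput estimate: images processed per second across all cards."""
    return cards * batch_size / (ms_per_step / 1000.0)


# bit_resnet50: 8 cards x 32 batch at 74.52 ms/step -> ~3435 img/s (reported: 3413.33)
print(approx_imgs_per_sec(8, 32, 74.52))
# densenet121: 8 cards x 32 batch at 47.34 ms/step -> ~5408 img/s (reported: 5446.81)
print(approx_imgs_per_sec(8, 32, 47.34))
```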
configs/cmt/README.md

+21 -26

@@ -2,10 +2,6 @@

 > [CMT: Convolutional Neural Networks Meet Vision Transformers](https://arxiv.org/abs/2107.06263)

-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |

 ## Introduction

@@ -14,29 +10,11 @@ dependencies and extract local information. In addition, to reduce computation c
 and depthwise convolution and pointwise convolution like MobileNet. By combing these parts, CMT could get a SOTA performance
 on ImageNet-1K dataset.

+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |

-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-<div align="center">
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
-| cmt_small | 26.09 | 8 | 128 | 224x224 | O2 | 1268s | 500.64 | 2048.01 | 83.24 | 96.41 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
-
-
-</div>
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.

 ## Quick Start

@@ -83,6 +61,23 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
 python validate.py -c configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
 ```

+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
+| cmt_small | 26.09 | 8 | 128 | 224x224 | O2 | 1268s | 500.64 | 2048.01 | 83.24 | 96.41 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
 ## References

 <!--- Guideline: Citation format should follow GB/T 7714. -->

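The "jit level" column in the tables above refers to the jit level of the MindSpore context (O0/O1/O2), as described in the configs/README.md illustration earlier in this commit. A minimal, hedged sketch of how such a level is typically selected, assuming the MindSpore 2.3 `set_context` API with a `jit_config` option (verify against the official documentation before relying on it):

```python
import mindspore as ms

# Assumption: MindSpore 2.3 accepts jit_config={"jit_level": ...} in set_context;
# check the official API docs if your version differs.
ms.set_context(mode=ms.GRAPH_MODE)              # graph mode, as used in the reported experiments
ms.set_context(jit_config={"jit_level": "O2"})  # one of "O0", "O1", "O2"
```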
configs/coat/README.md

+28 -26

@@ -2,37 +2,15 @@

 > [Co-Scale Conv-Attentional Image Transformers](https://arxiv.org/abs/2104.06399v2)

-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
 ## Introduction

 Co-Scale Conv-Attentional Image Transformer (CoaT) is a Transformer-based image classifier equipped with co-scale and conv-attentional mechanisms. First, the co-scale mechanism maintains the integrity of Transformers' encoder branches at individual scales, while allowing representations learned at different scales to effectively communicate with each other. Second, the conv-attentional mechanism is designed by realizing a relative position embedding formulation in the factorized attention module with an efficient convolution-like implementation. CoaT empowers image Transformers with enriched multi-scale and contextual modeling capabilities.

-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-<div align="center">
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
-| coat_tiny | 5.50 | 8 | 32 | 224x224 | O2 | 543s | 254.95 | 1003.92 | 79.67 | 94.88 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |
-
-</div>
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |

-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.


 ## Quick Start

@@ -79,6 +57,30 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
 python validate.py -c configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
 ```

+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
+| coat_tiny | 5.50 | 8 | 32 | 224x224 | O2 | 543s | 254.95 | 1003.92 | 79.67 | 94.88 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |
+
+
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
+
 ## References

 [1] Han D, Yun S, Heo B, et al. Rethinking channel dimensions for efficient model design[C]//Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition. 2021: 732-741.
