We currently release the code and models for:
- Top-down
01/18/2022
[Initial commits]:
- Models with Top-down.
The following models and logs can be downloaded from Google Drive: total_models, total_logs.
We also release the models on Baidu Cloud: total_models (lqht), total_logs (j7e2).
- All the models are pretrained on ImageNet-1K without Token Labeling and Layer Scale. The reason can be found in issue #12.
| Backbone | Input Size | AP | AP50 | AP75 | ARM | ARL | AR | FLOPs | Model | Log | Shell |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UniFormer-S | 256x192 | 74.0 | 90.3 | 82.2 | 66.8 | 76.7 | 79.5 | 4.7G | | | run.sh/config |
| UniFormer-S | 384x288 | 75.9 | 90.6 | 83.4 | 68.6 | 79.0 | 81.4 | 11.1G | | | run.sh/config |
| UniFormer-S | 448x320 | 76.2 | 90.6 | 83.2 | 68.6 | 79.4 | 81.4 | 14.8G | | | run.sh/config |
| UniFormer-B | 256x192 | 75.0 | 90.6 | 83.0 | 67.8 | 77.7 | 80.4 | 9.2G | | | run.sh/config |
| UniFormer-B | 384x288 | 76.7 | 90.8 | 84.0 | 69.3 | 79.7 | 81.4 | 14.8G | | | run.sh/config |
| UniFormer-B | 448x320 | 77.4 | 91.1 | 84.4 | 70.2 | 80.6 | 82.5 | 29.6G | | | run.sh/config |
Please refer to get_started for installation and dataset preparation.
- Download the pretrained models in our repository.
- Simply run the training scripts in exp as follows:

```shell
bash ./exp/top_down_256x192_global_small/run.sh
```

Or you can train other models as follows:
```shell
# single-gpu training
python tools/train.py <CONFIG_FILE> --cfg-options model.backbone.pretrained_path=<PRETRAIN_MODEL> [other optional arguments]

# multi-gpu training
tools/dist_train.sh <CONFIG_FILE> <GPU_NUM> --cfg-options model.backbone.pretrained_path=<PRETRAIN_MODEL> [other optional arguments]
```
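As a concrete illustration, the multi-gpu command can be composed and inspected before launching. The config path, pretrained checkpoint, and GPU count below are hypothetical placeholders, not files guaranteed to exist in this repository:

```shell
# Dry-run sketch: build the multi-gpu training command first, then launch it.
# CONFIG and PRETRAIN are hypothetical example paths -- substitute your own.
CONFIG=exp/top_down_256x192_global_small/config.py
PRETRAIN=pretrained/uniformer_small.pth
GPUS=8
CMD="tools/dist_train.sh $CONFIG $GPUS --cfg-options model.backbone.pretrained_path=$PRETRAIN"
echo "$CMD"  # inspect the command; run it directly once the paths are correct
```

Composing the command into a variable first makes it easy to log the exact invocation alongside the experiment.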
[Note]:
- We use global MHRA during training and set the corresponding hyperparameters in the config.py:

```python
window: False,  # whether to use window MHRA
hybrid: False,  # whether to use hybrid MHRA
```
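For context, a hypothetical config.py excerpt might embed these flags as follows. The field names come from the snippet above, but the surrounding dict structure is an assumption based on typical mmpose-style configs:

```python
# Hypothetical mmpose-style config fragment (structure is an assumption):
# global MHRA is selected by disabling both the window and hybrid variants.
model = dict(
    backbone=dict(
        window=False,  # whether to use window MHRA
        hybrid=False,  # whether to use hybrid MHRA
    ),
)
```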
- To avoid running out of memory, we can use torch.utils.checkpoint in the config.py:

```python
use_checkpoint=True,  # whether to use checkpoint
checkpoint_num=[0, 0, 2, 0],  # index for using checkpoint in every stage
```
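To clarify how checkpoint_num is interpreted, the sketch below assumes the common pattern of checkpointing the first checkpoint_num[stage] blocks in each stage; is_checkpointed is a hypothetical helper for illustration, not a function in this repository:

```python
# Hypothetical helper illustrating the assumed semantics of checkpoint_num:
# in stage s, only blocks with index < checkpoint_num[s] are recomputed
# via torch.utils.checkpoint instead of caching their activations.
def is_checkpointed(stage, block_idx, checkpoint_num=(0, 0, 2, 0)):
    return block_idx < checkpoint_num[stage]

# With checkpoint_num=[0, 0, 2, 0], only the first two blocks of stage 2
# trade extra compute in the backward pass for lower activation memory.
print([is_checkpointed(2, i) for i in range(4)])  # -> [True, True, False, False]
```

Raising the entries of checkpoint_num checkpoints more blocks, saving more memory at the cost of recomputation during backpropagation.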
```shell
# single-gpu testing
python tools/test.py <CONFIG_FILE> <POSE_CHECKPOINT_FILE> --eval mAP

# multi-gpu testing
tools/dist_test.sh <CONFIG_FILE> <POSE_CHECKPOINT_FILE> <GPU_NUM> --eval mAP
```
This repository is built on the mmpose and HRT repositories.