We conduct training on multiple 2D and 3D datasets, including Human3.6M, COCO, MuCo-3DHP, UP-3D, and MPII. During training, the script evaluates performance after every epoch and saves the best checkpoint. After training, we use the final (or the best) checkpoint (trained for 50 epochs) for testing.
```bash
python -m torch.distributed.launch --nproc_per_node=8 src/tools/run_deformer_bodymesh2.py \
    --train_yaml Tax-H36m-coco40k-Muco-UP-Mpii/train.yaml \
    --val_yaml human3.6m/valid.protocol2.yaml \
    --num_workers 16 \
    --per_gpu_train_batch_size 16 \
    --per_gpu_eval_batch_size 16 \
    --lr 1e-4 \
    --num_train_epochs 50 \
    --data_dir 'path_to_dataset' \
    --output_dir 'path_to_exp_dir' \
    --logging_steps 500 \
    --backbone hrnet-w48 \
    --return_interm_indices 0.1.2.3 \
    --decoder_type deformable_transformer \
    --down_sample_level 2 \
    --mask_type adj_skin
```
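Note that on PyTorch >= 1.10, `python -m torch.distributed.launch` is deprecated in favor of `torchrun`. A sketch of an equivalent launch is below; be aware that `torchrun` exposes the local rank via the `LOCAL_RANK` environment variable instead of passing a `--local_rank` argument, so a script written for `torch.distributed.launch` may need a small change to read it.

```shell
# Equivalent torchrun invocation (PyTorch >= 1.10). torchrun sets
# LOCAL_RANK in the environment rather than passing --local_rank.
torchrun --nproc_per_node=8 src/tools/run_deformer_bodymesh2.py \
    --train_yaml Tax-H36m-coco40k-Muco-UP-Mpii/train.yaml \
    --val_yaml human3.6m/valid.protocol2.yaml
# ...pass the remaining flags exactly as in the command above
```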
In the following script, we evaluate our model `deformer_h36m_state_dict.bin` on the Human3.6M validation set. See docs/DOWNLOAD.md for details on downloading the model files.
```bash
python -m torch.distributed.launch --nproc_per_node=8 src/tools/run_deformer_bodymesh2.py \
    --train_yaml Tax-H36m-coco40k-Muco-UP-Mpii/train.yaml \
    --val_yaml human3.6m/valid.protocol2.yaml \
    --num_workers 16 \
    --per_gpu_train_batch_size 16 \
    --per_gpu_eval_batch_size 16 \
    --lr 1e-4 \
    --num_train_epochs 50 \
    --data_dir 'path_to_dataset' \
    --logging_steps 500 \
    --backbone hrnet-w48 \
    --return_interm_indices 0.1.2.3 \
    --decoder_type deformable_transformer \
    --down_sample_level 2 \
    --mask_type adj_skin \
    --run_eval_only \
    --resume_checkpoint ./models/deformer_release/deformer_h36m_state_dict.bin \
    --resume_checkpoint_scaling ./models/deformer_release/deformer_h36m_state_dict_s.bin
```
It should print results like the following:
```
DeFormer INFO: Validation epoch: 0 mPVE: 0.00, mPJPE: 45.35, mPJPE_smpl: 44.35, PAmPJPE: 31.17
```
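The reported metrics are reconstruction errors in millimeters: mPVE over mesh vertices, mPJPE over 3D joints, and PA-mPJPE, which is mPJPE after a Procrustes (similarity-transform) alignment of the prediction to the ground truth. The following minimal NumPy sketch illustrates how these two joint metrics relate; it is our own illustration, not the repository's evaluation code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: mean Euclidean distance over joints."""
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment (scale + rotation + translation)."""
    # Center both (J, 3) point sets.
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance (Kabsch).
    U, s, Vt = np.linalg.svd(p.T @ g)
    R = Vt.T @ U.T
    # Fix an improper rotation (reflection) if one was returned.
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        s[-1] *= -1
        R = Vt.T @ U.T
    # Optimal isotropic scale.
    scale = s.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

Because the alignment removes global scale, rotation, and translation, PA-mPJPE is always at most mPJPE, which matches the gap between the two numbers in the log line above.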