- CUDA 11.6
- Python 3.10.4
- PyTorch 1.12.1
Please download the dataset from the Human3.6M website and refer to VideoPose3D to set up the Human3.6M dataset in the './dataset' directory, or download the processed data from here.
${POSE_ROOT}/
|-- dataset
| |--h36m
| | |-- data_3d_h36m.npz
| | |-- data_2d_h36m_gt.npz
| | |-- data_2d_h36m_cpn_ft_h36m_dbb.npz
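Once the files are in place, you can sanity-check them with a minimal snippet like the one below (a sketch; the key names follow the VideoPose3D convention and may differ depending on how the data was processed):

```python
import numpy as np

# Quick sanity check of the processed Human3.6M files.
poses_3d = np.load('dataset/h36m/data_3d_h36m.npz', allow_pickle=True)
poses_2d = np.load('dataset/h36m/data_2d_h36m_cpn_ft_h36m_dbb.npz', allow_pickle=True)

# With the VideoPose3D layout these are expected to be
# ['positions_3d'] and ['positions_2d', 'metadata'] respectively.
print(poses_3d.files)
print(poses_2d.files)
```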
To test with the pretrained models on Human3.6M:
python main.py --test --reload --refine_reload --refine --frames 81 --previous_dir checkpoint/posegraphnet-T-data-aug/0118_1113_07_81
To train on Human3.6M, first pretrain the spatial module (PoseGraphNet):
python pretrain_posegraphnet.py --nepoch 20 --frames 1
Alternatively, you can download the pretrained PoseGraphNet here.
Phase 1: train the full model with occlusion augmentation (sketched below), initializing the spatial module with the pretrained PoseGraphNet weights:
python main.py --frames 81 --occlusion_augmentation_train --num_occluded_j 1 --consecutive_frames --subset_size 6 --pretrained_spatial_module_init --pretrained_spatial_module_dir [your pre-trained PoseGraphNet directory path] --pretrained_spatial_module [your pre-trained PoseGraphNet file name inside directory]
Phase 2: continue training from the phase-1 checkpoint:
python main.py --frames 81 --occlusion_augmentation_train --num_occluded_j 1 --consecutive_frames --subset_size 6 --reload --spatial_module_lr 1e-3 --previous_dir [your phase-1 model saved directory path]
Phase 3: train the refinement module starting from the phase-2 checkpoint:
python main.py --frames 81 --reload --occlusion_augmentation_train --num_occluded_j 1 --consecutive_frames --subset_size 6 --spatial_module_lr 1e-3 --refine --lr_refine 1e-3 --previous_dir [your phase-2 model saved directory path]
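The occlusion flags above control a training-time augmentation: with --num_occluded_j 1 --consecutive_frames --subset_size 6, one randomly chosen joint is masked over a window of 6 consecutive frames. A minimal sketch of that idea (the actual implementation lives in the training code; the function and variable names here are illustrative):

```python
import numpy as np

def occlude_consecutive(seq_2d, num_occluded_j=1, subset_size=6, rng=None):
    """Zero out `num_occluded_j` random joints over `subset_size` consecutive
    frames of a (T, J, 2) 2D keypoint sequence. Illustrative sketch only."""
    rng = rng or np.random.default_rng()
    seq = seq_2d.copy()
    T, J, _ = seq.shape
    start = rng.integers(0, max(T - subset_size, 0) + 1)        # window start
    joints = rng.choice(J, size=num_occluded_j, replace=False)  # joints to hide
    seq[start:start + subset_size, joints, :] = 0.0
    return seq
```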
To test a model's robustness to missing keypoints (occluding a specific joint, index 0-16, across 30 frames):
cd scripts/occlusion_robustness_analysis
python joint_importance_analysis_posegraphnet-T.py --frames 81 --previous_dir ../../checkpoint/PATH/TO/MODEL_DIR --root_path ../../dataset
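Conceptually, the analysis masks one joint across a fixed block of 30 frames and measures how much the 3D error grows relative to the unoccluded input. A hedged sketch of that idea (model_fn and the helper names are illustrative, not the script's actual API):

```python
import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error over (T, J, 3) arrays.
    return np.linalg.norm(pred - gt, axis=-1).mean()

def joint_importance(model_fn, seq_2d, gt_3d, joint_idx, occluded_frames=30):
    """Error increase when joint `joint_idx` is zeroed for the first
    `occluded_frames` frames. `model_fn` maps (T, J, 2) 2D keypoints to
    (T, J, 3) 3D poses; all names are illustrative."""
    baseline = mpjpe(model_fn(seq_2d), gt_3d)
    occluded = seq_2d.copy()
    occluded[:occluded_frames, joint_idx, :] = 0.0
    return mpjpe(model_fn(occluded), gt_3d) - baseline
```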
The code is built on top of StridedTransformer.