This is the README for the code release of "EHFusion: An efficient heterogeneous fusion model for group-based 3D human pose estimation", implemented in PyTorch.
Make sure your environment satisfies the following requirements:
- Ubuntu 20.04
- CUDA 11.2
- Python 3.7.13
- PyTorch 1.8.1
- Matplotlib 3.1.0
You can create the environment as follows:
pip install -r requirements.txt
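After installation, you can quickly sanity-check the environment (this check is not part of the release; the expected versions are the ones listed above):

```python
# Quick environment sanity check (not part of the original release).
import torch
import matplotlib

print("PyTorch:", torch.__version__)          # expected 1.8.1
print("CUDA available:", torch.cuda.is_available())
print("Matplotlib:", matplotlib.__version__)  # expected 3.1.0
```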
Our model is evaluated on the Human3.6M and HumanEva-I datasets. We set up both datasets in the same way as VideoPose3D.
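Following the VideoPose3D preparation, the processed datasets end up as .npz files under ./data. The sketch below only checks that the Human3.6M files are present; the file names are the VideoPose3D defaults and are assumed here rather than confirmed by this repository.

```python
# Check for VideoPose3D-style Human3.6M data files (file names are assumed, not confirmed by this repo).
import os

expected = [
    "data/data_3d_h36m.npz",                  # 3D ground-truth poses
    "data/data_2d_h36m_gt.npz",               # 2D ground-truth keypoints
    "data/data_2d_h36m_cpn_ft_h36m_dbb.npz",  # 2D CPN detections
]
for path in expected:
    print(path, "found" if os.path.isfile(path) else "MISSING")
```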
To train the one-stage model on the 2D keypoints obtained by CPN, run:
python run_onestage.py -k cpn_ft_h36m_dbb --stage 1 -lfd 512 -e 80
The three-stage model is trained stage by stage.
For the first stage, run:
python run_threestage.py -k cpn_ft_h36m_dbb --stage 1 -lfd 512 -e 80
For the second stage, run:
python run_threestage.py -k cpn_ft_h36m_dbb --stage 2 -lfd 512 -p stage_1_best_model.bin -e 80
For the third stage, run:
python run_threestage.py -k cpn_ft_h36m_dbb --stage 3 -lfd 512 -ft stage_2_best_model.bin -lr 0.0005 -e 80
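Each later stage is initialized from the previous stage's best checkpoint (passed via -p or -ft above). The sketch below only illustrates what such checkpoint loading typically looks like in PyTorch; the path, the key name, and the model class are assumptions for illustration, not the repository's actual code.

```python
# Hypothetical illustration of chaining stages via checkpoints.
# The path, the 'model_pos' key, and the model class are assumptions, not taken from this repo.
import torch

checkpoint = torch.load("checkpoint/stage_1_best_model.bin", map_location="cpu")
print(list(checkpoint.keys()))  # inspect what the stage-1 checkpoint stores

# model = EHFusion(...)                                         # build the stage-2 model
# model.load_state_dict(checkpoint["model_pos"], strict=False)  # reuse matching stage-1 weights
```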
You can download our pre-trained models from Google Drive. Put CPN/cpn_one-stage_best_epoch.bin, CPN/cpn_three-stage_3_best_epoch.bin, GT/gt_one-stage_best_epoch.bin, and GT/gt_three-stage_3_best_epoch.bin in the ./checkpoint directory. All of these models are trained on the Human3.6M dataset.
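After downloading, the ./checkpoint directory should contain:

```
checkpoint/
├── cpn_one-stage_best_epoch.bin
├── cpn_three-stage_3_best_epoch.bin
├── gt_one-stage_best_epoch.bin
└── gt_three-stage_3_best_epoch.bin
```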
To evaluate the one-stage model trained on the 2D keypoints obtained by CPN, run:
python run_onestage.py -k cpn_ft_h36m_dbb --evaluate cpn_one-stage_best_epoch.bin --stage 1 -lfd 512
To evaluate the three-stage model trained on the 2D keypoints obtained by CPN, run:
python run_threestage.py -k cpn_ft_h36m_dbb --evaluate cpn_three-stage_3_best_epoch.bin --stage 3 -lfd 512
To evaluate the one-stage model trained on the ground-truth 2D keypoints, run:
python run_onestage.py -k gt --evaluate gt_one-stage_best_epoch.bin --stage 1 -lfd 256
To evaluate the three-stage model trained on the ground-truth 2D keypoints, run:
python run_threestage.py -k gt --evaluate gt_three-stage_3_best_epoch.bin --stage 3 -lfd 256
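Evaluation on Human3.6M is conventionally reported as MPJPE, the mean Euclidean distance (in millimetres) between predicted and ground-truth joint positions. The snippet below is only a minimal illustration of the metric itself, not the repository's evaluation code:

```python
# Minimal MPJPE computation (illustrative only; not this repository's evaluation code).
import numpy as np

def mpjpe(predicted, target):
    """Mean per-joint position error for arrays of shape (frames, joints, 3)."""
    assert predicted.shape == target.shape
    return np.mean(np.linalg.norm(predicted - target, axis=-1))

# Example with random data: 100 frames, 17 Human3.6M joints, 3D coordinates.
pred = np.random.randn(100, 17, 3)
gt = np.random.randn(100, 17, 3)
print("MPJPE:", mpjpe(pred, gt))
```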
Our code refers to other open-source repositories, such as VideoPose3D.
We thank the authors for releasing their code. If you use our code, please consider citing their works as well.