conda create python=3.7 --name jotr
conda activate jotr
pip install torch==1.8.0 torchvision==0.9.0
sh requirements.sh
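As a quick sanity check after installation, a small stdlib-only helper (hypothetical, not part of this repo) can report whether the pinned packages are present. It uses importlib.metadata, which needs Python 3.8+; inside the Python 3.7 environment above, `pip show torch` serves the same purpose:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Versions pinned by the install command above.
for pkg, expected in [("torch", "1.8.0"), ("torchvision", "0.9.0")]:
    found = installed_version(pkg)
    status = "OK" if found == expected else f"got {found}"
    print(f"{pkg}: expected {expected}, {status}")
```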
We prepare the data in a similar way to 3DCrowdNet. Please refer to 3DCrowdNet for the datasets, SMPL model, VPoser model, and backbone pre-trained weights.
Download the annotations for 3DPW-PC and 3DPW-OC.
Download the checkpoints of JOTR from here.
The data directory should be organized as follows:
${ROOT}
|-- checkpoint
| |-- 3dpw_best_ckpt.pth.tar
| |-- 3dpw-crowd_best_ckpt.pth.tar
| |-- 3dpw-oc_best_ckpt.pth.tar
| |-- 3dpw-pc_best_ckpt.pth.tar
|-- data
| |-- J_regressor_extra.npy
| |-- snapshot_0.pth.tar
| |-- 3DPW
| | |-- 3DPW_latest_test.json
| | |-- 3DPW_oc.json
| | |-- 3DPW_pc.json
| | |-- 3DPW_validation_crowd_hhrnet_result.json
| | |-- imageFiles
| | |-- sequenceFiles
| |-- CrowdPose
| | |-- annotations
| | |-- images
| |-- Human36M
| | |-- images
| | |-- annotations
| | |-- J_regressor_h36m_correct.npy
| |-- MSCOCO
| | |-- images
| | | |-- train2017
| | |-- annotations
| | |-- J_regressor_coco_hip_smpl.npy
| |-- MuCo
| | |-- augmented_set
| | |-- unaugmented_set
| | |-- MuCo-3DHP.json
| | |-- smpl_param.json
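To catch missing files before training or evaluation, the tree above can be checked with a short stdlib sketch. The helper itself is hypothetical and the path list below is a non-exhaustive sample of the layout:

```python
from pathlib import Path

# A sample of key files/directories from the layout above (not exhaustive).
EXPECTED = [
    "checkpoint/3dpw_best_ckpt.pth.tar",
    "data/J_regressor_extra.npy",
    "data/snapshot_0.pth.tar",
    "data/3DPW/3DPW_latest_test.json",
    "data/3DPW/imageFiles",
    "data/3DPW/sequenceFiles",
    "data/MuCo/MuCo-3DHP.json",
]

def missing_paths(root):
    """Return the subset of EXPECTED that does not exist under root."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    for p in missing_paths("."):
        print("missing:", p)
```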
Reproduce the results in the paper (Table 1 and Table 2) by running the following command:
sh eval.sh
Train the model by running the following command:
sh train.sh
Our code partially borrows from 3DCrowdNet, DETR, AutomaticWeightedLoss, deep_training, and PositionalEncoding2D; thanks to their authors.
This code is distributed under the MIT license.
Note that our code depends on other libraries, including SMPL, VPoser, and uses datasets that have their own licenses. Please refer to the corresponding websites for more details.
@article{li2023jotr,
  title={JOTR: 3D Joint Contrastive Learning with Transformers for Occluded Human Mesh Recovery},
  author={Li, Jiahao and Yang, Zongxin and Wang, Xiaohan and Ma, Jianxin and Zhou, Chang and Yang, Yi},
  journal={ICCV},
  year={2023}
}