This is a PyTorch implementation of our paper: A Probabilistic Attention Model with Occlusion-aware Texture Regression for 3D Hand Reconstruction from a Single RGB Image (CVPR 2023).
- Python (>=3.7)
- PyTorch (>=1.7.1)
- torchvision (>=0.8.2)
- cuda (>=11.0)
- PyTorch3D (>=0.3.0)
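Before installing the datasets and models below, it can help to confirm the environment meets these minimums. The sketch below is a minimal, stdlib-only version check; the package names and minimum versions are taken from the list above, and importing is only attempted for packages that are installed.

```python
# Minimal sketch of a prerequisite check for the versions listed above.
# Only torch and torchvision are probed here; CUDA and PyTorch3D checks
# would follow the same pattern.

def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '1.7.1' (or '1.13.0+cu117')
    into a comparable tuple of integers."""
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3] if p.isdigit())

def meets_minimum(installed: str, required: str) -> bool:
    """True if the installed version is at least the required one."""
    return version_tuple(installed) >= version_tuple(required)

if __name__ == "__main__":
    minimums = {"torch": "1.7.1", "torchvision": "0.8.2"}
    for pkg, required in minimums.items():
        try:
            mod = __import__(pkg)
            ok = meets_minimum(mod.__version__, required)
            print(f"{pkg} {mod.__version__}: {'OK' if ok else 'too old'}")
        except ImportError:
            print(f"{pkg}: not installed")
```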
- Download the FreiHAND dataset from the website.
- Download the HO3D dataset from the website, and annotation files from [here].
Put the HO3D dataset in `${REPO_DIR}/data`, and the annotation files in `${REPO_DIR}/data/HO3D_v2/annotations/`.
- Download manopth, and put it in `${REPO_DIR}/manopth`.
- Download `MANO_RIGHT.pkl` from here, and put the file in `${REPO_DIR}/AMVUR/modeling/data`.
- Download `cls_hrnet_w64_sgd_lr5e-2_wd1e-4_bs32_x100.yaml` and `hrnetv2_w64_imagenet_pretrained.pth` from the HRNet models, and put them in `${REPO_DIR}/models/hrnet`.
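With all files in place, the repository layout can be sanity-checked before launching an experiment. The sketch below is an assumption-laden helper, not part of the repo: the expected relative paths are transcribed from the steps above, so adjust the list if your checkout differs.

```python
# Sketch of a layout check for the files placed in the steps above.
# The relative paths mirror those steps and are assumptions about the
# final layout under ${REPO_DIR}.

from pathlib import Path

EXPECTED = [
    "data/HO3D_v2/annotations",
    "manopth",
    "AMVUR/modeling/data/MANO_RIGHT.pkl",
    "models/hrnet/cls_hrnet_w64_sgd_lr5e-2_wd1e-4_bs32_x100.yaml",
    "models/hrnet/hrnetv2_w64_imagenet_pretrained.pth",
]

def missing_paths(repo_dir: str, expected=EXPECTED) -> list:
    """Return the expected paths that do not yet exist under repo_dir."""
    root = Path(repo_dir)
    return [p for p in expected if not (root / p).exists()]

if __name__ == "__main__":
    for p in missing_paths("."):
        print("missing:", p)
```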
- Supervised Experiment
Evaluation: Our pre-trained model can be downloaded from here; put the file in `${REPO_DIR}/pre_trained`. Run:

```bash
python -m torch.distributed.launch --nproc_per_node=4 \
    experiments/supervised_HO3D_v2.py \
    --config_json ./experiments/config/test.json
```
It will generate a prediction file called `pred.zip`. After that, please submit the prediction file to the codalab challenge to see the results.

Training:

```bash
python -m torch.distributed.launch --nproc_per_node=4 \
    experiments/supervised_HO3D_v2.py \
    --config_json ./experiments/config/train.json
```
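For reference, the evaluation step's `pred.zip` can be assembled by hand from per-frame predictions. The sketch below assumes the HO3D codalab convention of a single `pred.json` inside the archive holding 3D joints and mesh vertices per frame; `write_prediction_zip` is a hypothetical helper, and the exact format should be verified on the challenge page before submitting.

```python
# Sketch of packaging predictions into a pred.zip submission, assuming
# (not confirmed here) that the challenge expects a pred.json of the
# form [joints_per_frame, vertices_per_frame] inside the zip.

import json
import zipfile

def write_prediction_zip(xyz_list, verts_list, out_path="pred.zip"):
    """Serialize predictions to pred.json and wrap it in a zip archive.

    xyz_list:   list of per-frame 21x3 joint predictions
    verts_list: list of per-frame 778x3 MANO vertex predictions
    """
    payload = json.dumps([xyz_list, verts_list])
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("pred.json", payload)
    return out_path
```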
- Weakly Supervised Experiment
Evaluation:

```bash
python -m torch.distributed.launch --nproc_per_node=4 \
    experiments/weakly_supervised_HO3D_v2.py \
    --config_json ./experiments/config/test.json
```
Training:

```bash
python -m torch.distributed.launch --nproc_per_node=4 \
    experiments/weakly_supervised_HO3D_v2.py \
    --config_json ./experiments/config/train.json
```
@inproceedings{jiang2023probabilistic,
title={A Probabilistic Attention Model with Occlusion-aware Texture Regression for 3D Hand Reconstruction from a Single RGB Image},
author={Jiang, Zheheng and Rahmani, Hossein and Black, Sue and Williams, Bryan M},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={758--767},
year={2023}
}