# 🔥(CVPR 2023) ACR: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction

This is the official repository of ACR.

**Zhengdi Yu, Shaoli Huang, Chen Fang, Toby P. Breckon, Jue Wang**

Conference on Computer Vision and Pattern Recognition (CVPR), 2023

[Paper][Project Page][Video]

## News
- [2023/03/24] Code release! ⭐
- [2023/03/10] ACR is on arXiv now.
- [2023/02/27] ACR got accepted by CVPR 2023! 🎉
## Installation

```bash
conda create -n ACR python==3.8.8
conda activate ACR
conda install -n ACR pytorch==1.10.0 torchvision==0.11.1 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
```
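Optionally, a quick sanity check (our suggestion, not part of the official instructions) confirms that the pinned versions were installed and that PyTorch can see the GPU:

```python
# Optional sanity check: verify the pinned builds and GPU visibility
# before running the demos. Not part of the official setup.
import torch
import torchvision

print(torch.__version__)          # expected: 1.10.0
print(torchvision.__version__)    # expected: 0.11.1
print(torch.version.cuda)         # expected: 10.2
print(torch.cuda.is_available())  # should be True on a CUDA-capable machine
```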
For rendering and visualization on a headless server, consider installing `pytorch3d` following the official instructions, and set `renderer` to `pytorch3d` in `configs/demo.yml`. Note that `pyrender` can only be used on a desktop.
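For example, the relevant line in `configs/demo.yml` would look roughly like this (a sketch showing only the `renderer` key named above; the file's other keys are omitted):

```yaml
# configs/demo.yml (excerpt; surrounding keys omitted)
renderer: pytorch3d   # use 'pyrender' instead only on a desktop with a display
```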
- Register and download the MANO model. Put `MANO_LEFT.pkl` and `MANO_RIGHT.pkl` in `mano/`.
- Download the pre-trained weights from here (updated on 3.28) and put them in `checkpoints/`.
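After these steps, the relevant parts of the repository should look roughly like this (a sketch based only on the paths above; the weights filename is a placeholder):

```
ACR/
├── checkpoints/
│   └── <pre-trained weights>
├── configs/
│   └── demo.yml
└── mano/
    ├── MANO_LEFT.pkl
    └── MANO_RIGHT.pkl
```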
## Demo

Note: use `-t` to smooth your results. We provide examples in `demos/`.
```bash
# Run a real-time demo:
python -m acr.main --demo_mode webcam -t

# Run on a single image:
python -m acr.main --demo_mode image --inputs <PATH_TO_IMAGE>

# Run on a folder of images:
python -m acr.main --demo_mode folder -t --inputs <PATH_TO_FOLDER>

# Run on a video:
python -m acr.main --demo_mode video -t --inputs <PATH_TO_VIDEO>
```
Finally, the visualization will be saved in `demos_outputs/`. In `video` or `folder` mode, the results will also be saved as `<FILENAME>_output.mp4`.
## Citation

```bibtex
@inproceedings{yu2023acr,
  title     = {ACR: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction},
  author    = {Yu, Zhengdi and Huang, Shaoli and Fang, Chen and Breckon, Toby P. and Wang, Jue},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023}
}
```
## Acknowledgement

The PyTorch implementation of MANO is based on manopth. We use some parts of the great code from ROMP. For MANO segmentation and rendering, we follow zc-alexfan. We thank all the authors for their impressive work!
## Contact

For technical questions, please contact zhengdi.yu@durham.ac.uk or ZhengdiYu@hotmail.com.

For commercial licensing, please contact shaolihuang@tencent.com.