This repository contains the PyTorch implementation of TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation, published in the ESWA journal. It also contains instructions to reproduce the results reported in the paper.
We propose a novel network architecture that we call TriHorn-Net. It consists of two stages. In the first stage, the input depth image is run through the encoder network f, which extracts and combines low-level and high-level features of the hand and outputs a high-resolution feature volume. This volume is passed on to three separate branches.

The UV branch computes a per-joint attention map, where each map is focused on the pixels where the corresponding joint occurs. This behavior is explicitly enforced by applying 2D supervision to the heatmaps obtained by passing the attention maps through a special softmax layer. The second branch, called the attention enhancement branch, also computes a per-joint attention map, but does so under no constraints, allowing it to freely learn to detect the hand pixels most important for estimating the joint depth values under different scenarios. This attention map enhances the one computed by the UV branch through a fusion operation, performed as a linear interpolation controlled by per-joint learnable parameters. As a result, the fused attention maps attend not only to the joint pixels but also to hand pixels that do not belong to joints yet contain useful information for estimating the joint depth values. The fused attention map is then used as guidance for pooling features from the depth feature map computed by the depth branch. Finally, a weight-sharing linear layer estimates the joint depth values from the feature vector pooled for each joint.
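The fusion and attention-guided pooling described above can be sketched as follows. This is a minimal NumPy sketch with illustrative shapes and random tensors; the exact layer definitions, the form of the softmax, and the parameterization of the per-joint mixing weights are defined in this repository's code, not here.

```python
import numpy as np

def spatial_softmax(x):
    # Softmax over the spatial dimensions of a (J, H, W) map, so each
    # joint's map becomes a probability distribution over pixels.
    flat = x.reshape(x.shape[0], -1)
    e = np.exp(flat - flat.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    return p.reshape(x.shape)

J, H, W, C = 14, 8, 8, 16                               # illustrative sizes
uv_att = spatial_softmax(np.random.randn(J, H, W))      # UV-branch attention (2D-supervised)
enh_att = spatial_softmax(np.random.randn(J, H, W))     # enhancement-branch attention
beta = 1 / (1 + np.exp(-np.random.randn(J)))            # per-joint learnable weights, squashed to [0, 1]

# Fusion: per-joint linear interpolation between the two attention maps.
fused = beta[:, None, None] * uv_att + (1 - beta[:, None, None]) * enh_att

# Attention-guided pooling from the depth feature map (C, H, W):
# each joint gets one C-dimensional feature vector.
depth_feat = np.random.randn(C, H, W)
pooled = np.einsum("jhw,chw->jc", fused, depth_feat)

# Weight-sharing linear layer: the same (C,) weight vector maps every
# joint's pooled feature to a scalar depth estimate.
w, b = np.random.randn(C), 0.0
depth_pred = pooled @ w + b                              # shape (J,)
```

Because each fused map is a convex combination of two spatial probability distributions, it still sums to one over the pixels, so the pooling step is a proper attention-weighted average.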
Download the repository:
makeReposit = [/the/directory/as/you/wish]
mkdir -p $makeReposit/; cd $makeReposit/
git clone https://github.com/mrezaei92/infrustructure_HPE.git
NYU dataset
Download and extract the dataset from the link provided below
Copy the content of the folder data/NYU to where the dataset is located
ICVL dataset
Download the file test.pickle from here
Download and extract the training set from the link provided below
Navigate to the folder data/ICVL. Run the following command to get a file named train.pickle:
python prepareICVL_train.py ICVLpath/Training
Here, ICVLpath represents the address where the training set is extracted. Place both test.pickle and train.pickle in one folder; this folder will serve as the ICVL dataset folder.
MSRA dataset
Download and extract the dataset from the link provided below
Download and extract data/MSRA.tar.xz and copy its content to where the dataset is located
Before running an experiment, first set the value "datasetpath" in the corresponding .yaml file located in the configs folder. This value should be set to the path of the corresponding dataset. Then open a terminal and run the corresponding command.
After running each command, training is first done, and then the resulting models will be evaluated on the corresponding test set.
The results will be saved in a file named "results.txt".
NYU
bash train_eval_NYU.bash
ICVL
bash train_eval_ICVL.bash
MSRA
bash train_eval_MSRA.bash
This repo supports using the following datasets for training and testing:
- ICVL Hand Posture Dataset [link] [paper]
- NYU Hand Pose Dataset [link] [paper]
- MSRA Hand Pose Dataset [link] [paper]
The table below provides the predicted labels on the ICVL, NYU, and MSRA datasets. All labels are in the (u, v, d) format, where u and v are pixel coordinates and d is the depth value.
Dataset | Predicted Labels |
---|---|
ICVL | Download |
NYU | Download |
MSRA | Download |
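Since the predicted labels are in (u, v, d) form, comparing them against 3D ground truth requires back-projecting them into camera space with the standard pinhole model. The sketch below is illustrative only: the function name and the intrinsic values (fx, fy, cx, cy) are assumptions for the example, not values taken from this repository; check each dataset's loader for the exact intrinsics it uses.

```python
import numpy as np

def uvd_to_xyz(uvd, fx, fy, cx, cy):
    # Back-project (u, v, depth) pixel labels to 3D camera coordinates
    # with the pinhole model: x = (u - cx) * d / fx, y = (v - cy) * d / fy.
    u, v, d = uvd[..., 0], uvd[..., 1], uvd[..., 2]
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return np.stack([x, y, d], axis=-1)

# One joint sitting exactly at the (assumed) principal point, 500 mm deep:
joints_uvd = np.array([[320.0, 240.0, 500.0]])
xyz = uvd_to_xyz(joints_uvd, fx=588.0, fy=587.0, cx=320.0, cy=240.0)
# At the principal point, x and y are zero and the depth passes through.
```

Metrics such as mean per-joint 3D error are then computed on the resulting xyz coordinates.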
If you use this work in your research or projects, please cite TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation.
@article{rezaei2023trihorn,
title={TriHorn-Net: A model for accurate depth-based 3D hand pose estimation},
author={Rezaei, Mohammad and Rastgoo, Razieh and Athitsos, Vassilis},
journal={Expert Systems with Applications},
pages={119922},
year={2023},
publisher={Elsevier}
}
Or
@article{rezaei2022trihorn,
title={TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation},
author={Rezaei, Mohammad and Rastgoo, Razieh and Athitsos, Vassilis},
journal={arXiv preprint arXiv:2206.07117},
year={2022}
}