PyTorch implementation of the data-driven hand-object contact modeling experiments presented in the paper:
ContactPose: A Dataset of Grasps with Object Contact and Hand Pose -
Samarth Brahmbhatt, Chengcheng Tang, Christopher D. Twigg, Charles C. Kemp, and James Hays,
ECCV 2020.
Please visit http://contactpose.cc.gatech.edu to explore the dataset.
Note: This is the ML code for the ECCV 2020 paper. The ContactPose dataset API is here.
@InProceedings{Brahmbhatt_2020_ECCV,
author = {Brahmbhatt, Samarth and Tang, Chengcheng and Twigg, Christopher D. and Kemp, Charles C. and Hays, James},
title = {{ContactPose}: A Dataset of Grasps with Object Contact and Hand Pose},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {August},
year = {2020}
}
- Clone this repository:
$ git clone git@github.com:samarth-robo/ContactPose-ML.git contactpose-ml
$ cd contactpose-ml
- Install Miniconda. Create the `contactpose_ml` conda environment: `conda env create -f environment.yml`. Activate it:
$ source activate contactpose_ml
- Tested with PyTorch 1.2.0 (as specified in `environment.yml`). Later versions will probably work, but there are no guarantees; please create an issue if you run into problems.
- Install `pytorch-geometric` from source. Pip wheels unfortunately cannot be used because of the outdated PyTorch version (see the sketch after this list).
- Check out the branch corresponding to the features you want to use for contact prediction (see the Features column in the table below).
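The README asks for a from-source install of `pytorch-geometric` but does not spell out the steps. The snippet below is a rough sketch of one way to do it: the `rusty1s` repository names and the idea of building each companion extension (`torch-scatter`, `torch-sparse`, `torch-cluster`, `torch-spline-conv`) before `torch-geometric` are assumptions based on the library's usual setup, and the tags compatible with PyTorch 1.2.0 should be verified against the pytorch-geometric installation notes.

```bash
# Hedged sketch, not from the original README: build pytorch-geometric
# and its companion extensions from source so they compile against the
# PyTorch pinned in the conda environment. Check out tags known to work
# with PyTorch 1.2.0; the latest master may not build against it.
source activate contactpose_ml
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # sanity check

for repo in pytorch_scatter pytorch_sparse pytorch_cluster pytorch_spline_conv pytorch_geometric; do
  git clone https://github.com/rusty1s/${repo}.git
  pip install ./${repo}   # builds the C++/CUDA extensions locally
done
```

Running this inside the activated `contactpose_ml` environment matters: the extensions are compiled against whichever PyTorch is visible at install time.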
The links in the table below are provided only for reference; you should not need to download them manually. `get_data.py` in each branch will download everything for you.
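For reference, kicking off the downloads might be as simple as the following; whether `get_data.py` takes command-line arguments (for example, a target directory) is an assumption, so check the script in your checked-out branch first.

```bash
# Assumed invocation; see get_data.py in the checked-out branch for the
# actual options it expects.
python get_data.py
```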
| Learner | Features | Split | Link |
|---|---|---|---|
| MLP | simple-joints | objects | link |
| MLP | relative-joints | objects | link |
| MLP | skeleton | objects | link |
| MLP | mesh | objects | link |
| MLP | simple-joints | participants | link |
| MLP | relative-joints | participants | link |
| MLP | skeleton | participants | link |
| MLP | mesh | participants | link |
| PointNet++ | simple-joints | objects | link |
| PointNet++ | relative-joints | objects | link |
| PointNet++ | skeleton | objects | link |
| PointNet++ | mesh | objects | link |
| PointNet++ | simple-joints | participants | link |
| PointNet++ | relative-joints | participants | link |
| PointNet++ | skeleton | participants | link |
| PointNet++ | mesh | participants | link |
| VoxNet | skeleton | objects | link |
| VoxNet | skeleton | participants | link |
| Heuristic (10 pose params) | - | objects | link |
| Heuristic (15 pose params) | - | objects | link |
| Heuristic (10 pose params) | - | participants | link |
| Heuristic (15 pose params) | - | participants | link |
| enc-dec, PointNet++ | images (3 view) | objects | link |
| enc-dec, PointNet++ | images (1 view) | objects | link |
| enc-dec, PointNet++ | images (3 view) | participants | link |
| enc-dec, PointNet++ | images (1 view) | participants | link |
- object model voxelizations
- 3D models of objects
- Pre-computed "prediction data":