You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions (CVPR 2020)
Download dataset
Original training was done with CUDA 10.2.
Install the basic dependencies with pip install -r requirements.txt
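To confirm that your PyTorch build can actually see the GPU before training, a quick check like the one below can help. This assumes PyTorch is among the installed requirements; it is a generic sanity check, not part of this repository.

```python
# check_cuda.py -- quick sanity check that PyTorch can see the GPU.
# Assumes PyTorch is installed via requirements.txt; not part of the original repo.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA build:     ", torch.version.cuda)        # e.g. "10.2"
    print("Device:         ", torch.cuda.get_device_name(0))
```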
For your sample sequence, please generate:
- a directory of homographies (see calc_homgraphy/README.md)
- a directory of openpose predictions (see the reading sketch after this list)
- vocab.pkl (see vocab/build_vocab.py)
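OpenPose writes one JSON file per frame; if you are unsure what the directory of openpose predictions should contain, a minimal sketch for reading one such file is shown below. It assumes the standard BODY_25 JSON layout, and the file name and path are illustrative only, not a layout this repo requires.

```python
# read_openpose.py -- minimal sketch for loading a single OpenPose prediction.
# Assumes the standard OpenPose JSON output (one file per frame, BODY_25 keypoints);
# the path below is illustrative, not the repo's required layout.
import json
import numpy as np

with open("openpose/frame_000001_keypoints.json") as f:
    frame = json.load(f)

for person in frame["people"]:
    # Flat list [x0, y0, c0, x1, y1, c1, ...] -> (25, 3) array of (x, y, confidence).
    keypoints = np.array(person["pose_keypoints_2d"]).reshape(-1, 3)
    print(keypoints.shape)
```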
Then run the following command:
python sample.py --vocab_path <path/to/sample_vocab.pkl> --output <path/to/output_dir> --encoder_path <path/to/trained/encoder.pth> --decoder_path <path/to/trained/decoder.pth> --upp
Change the flag --upp to --low to test the lower-body model.
Include the flag --visualize to plot the predicted stick figures.
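If you would like to plot a prediction yourself instead of (or in addition to) using --visualize, a rough stick figure can be drawn with matplotlib. The joint count, joint order, and limb connectivity below are hypothetical placeholders; substitute the actual output format written by sample.py.

```python
# plot_pose.py -- rough stick-figure plot of one predicted pose.
# The (J, 2) joint array, joint order, and limb list are hypothetical placeholders;
# replace them with the actual output produced by sample.py.
import numpy as np
import matplotlib.pyplot as plt

pose = np.random.rand(12, 2)             # stand-in for one predicted upper-body pose
limbs = [(0, 1), (1, 2), (2, 3),         # e.g. head -> neck -> shoulders ...
         (1, 4), (4, 5), (5, 6),
         (1, 7), (7, 8), (8, 9)]

for a, b in limbs:
    plt.plot([pose[a, 0], pose[b, 0]], [pose[a, 1], pose[b, 1]], "b-")
plt.scatter(pose[:, 0], pose[:, 1], c="r")
plt.gca().invert_yaxis()                  # image coordinates: y grows downward
plt.axis("equal")
plt.show()
```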
For each of your training sequences, please generate:
- a directory of homographies (see calc_homgraphy/README.md)
- a directory of openpose predictions
- vocab.pkl (see vocab/build_vocab.py)
- annotation.pkl (see vocab/build_annotation.py; a quick sanity-check sketch follows this list)
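The exact contents of vocab.pkl and annotation.pkl are defined by vocab/build_vocab.py and vocab/build_annotation.py. The generic snippet below only checks that the files load and reports their top-level type and length, without assuming anything about their internal structure.

```python
# inspect_pickles.py -- generic sanity check of the generated pickle files.
# Makes no assumption about their internal structure beyond being picklable.
import pickle

for path in ["vocab.pkl", "annotation.pkl"]:
    with open(path, "rb") as f:
        obj = pickle.load(f)
    size = len(obj) if hasattr(obj, "__len__") else "n/a"
    print(f"{path}: type={type(obj).__name__}, len={size}")
```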
Then run the following command:
python train.py --model_path <path/to/save/models> --vocab_path <path/to/train_vocab.pkl> --annotation_path <path/to/annotation.pkl> --upp
Change the flag --upp to --low to train the lower-body model.
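Training saves the encoder and decoder checkpoints under --model_path, which sample.py later consumes via --encoder_path and --decoder_path. If you want to inspect a checkpoint outside of sample.py, a generic load looks like the sketch below; whether train.py stores full modules or state_dicts is not assumed here, and the path is only an example.

```python
# load_checkpoint.py -- generic look at a saved .pth checkpoint.
# Whether train.py stores full modules or state_dicts is not assumed here;
# the path is an example, not a layout the repo enforces.
import torch

ckpt = torch.load("models/encoder.pth", map_location="cpu")
if isinstance(ckpt, dict):
    # Likely a state_dict: print parameter names and shapes.
    for name, value in ckpt.items():
        print(name, getattr(value, "shape", type(value)))
else:
    print(type(ckpt))
```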
CC-BY-NC 4.0. See the LICENSE file.
@article{ng2019you2me,
  title={You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions},
  author={Ng, Evonne and Xiang, Donglai and Joo, Hanbyul and Grauman, Kristen},
  journal={CVPR},
  year={2020}
}