This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

# You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions


## Install

- Download the dataset.
- Original training was done with CUDA 10.2.
- Install the basic dependencies with `pip install -r requirements.txt`.

## Test

Please generate:

- a directory of homographies (see `calc_homgraphy/README.md`)
- a directory of openpose predictions
- `vocab.pkl` (see `vocab/build_vocab.py`)

for your sample sequence.
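Before running the sampling script, it can help to verify that a sequence directory actually contains all three inputs. The sketch below is a hedged illustration: the directory and file names (`homographies`, `openpose`, `vocab.pkl`) are assumptions about your own layout, not names mandated by this repo.

```python
import os

# Minimal pre-flight check before running sample.py. The names in
# REQUIRED_DIRS / REQUIRED_FILES are assumptions -- adjust them to
# match however you organized your sample sequence.
REQUIRED_DIRS = ["homographies", "openpose"]
REQUIRED_FILES = ["vocab.pkl"]

def missing_inputs(seq_dir):
    """Return the required inputs that are missing from seq_dir."""
    missing = [d for d in REQUIRED_DIRS
               if not os.path.isdir(os.path.join(seq_dir, d))]
    missing += [f for f in REQUIRED_FILES
                if not os.path.isfile(os.path.join(seq_dir, f))]
    return missing
```

Running this before `sample.py` gives a clearer error than a mid-run crash on a missing path.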

Then run the following command:

```
python sample.py --vocab_path <path/to/sample_vocab.pkl> --output <path/to/output_dir> --encoder_path <path/to/trained/encoder.pth> --decoder_path <path/to/trained/decoder.pth> --upp
```

Change the flag `--upp` to `--low` to test the lower-body model.

Include the flag `--visualize` to plot the predicted stick figures.
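Conceptually, plotting a stick figure just means drawing line segments between pairs of predicted 2D joints. The sketch below illustrates that idea only; the joint indices and skeleton edges are hypothetical, not the ordering used by this repo's visualization code.

```python
# Hypothetical skeleton edges: each pair (a, b) means "draw a segment
# between joint a and joint b". The indices are illustrative, e.g.
# head-neck, neck-left shoulder, neck-right shoulder.
SKELETON = [(0, 1), (1, 2), (1, 3)]

def stick_segments(joints, skeleton=SKELETON):
    """joints: sequence of (x, y) tuples; returns ((x1, y1), (x2, y2)) segments."""
    return [(joints[a], joints[b])
            for a, b in skeleton
            if a < len(joints) and b < len(joints)]
```

Each returned segment can then be handed to any 2D plotting library.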

## Train

Please generate:

- a directory of homographies (see `calc_homgraphy/README.md`)
- a directory of openpose predictions
- `vocab.pkl` (see `vocab/build_vocab.py`)
- `annotation.pkl` (see `vocab/build_annotation.py`)

for each of your training sequences.
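The real `annotation.pkl` is produced by `vocab/build_annotation.py`. For illustration only, the sketch below assumes a simple frame-id to 2D-keypoints mapping (an assumed schema, not the repo's actual one) and shows the pickle round-trip such a file goes through.

```python
import pickle

# Assumed schema for illustration: {frame_id: [[x, y], ...]}.
# The actual structure is defined by vocab/build_annotation.py.
def save_annotations(path, annotations):
    """Serialize an annotations mapping to a .pkl file."""
    with open(path, "wb") as f:
        pickle.dump(annotations, f)

def load_annotations(path):
    """Load an annotations mapping back from a .pkl file."""
    with open(path, "rb") as f:
        return pickle.load(f)
```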

Then run the following command:

```
python train.py --model_path <path/to/save/models> --vocab_path <path/to/train_vocab.pkl> --annotation_path <path/to/annotation.pkl> --upp
```

Change the flag `--upp` to `--low` to train the lower-body model.

## License

CC-BY-NC 4.0. See the `LICENSE` file.

## Citation

```
@article{ng2019you2me,
  title={You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions},
  author={Ng, Evonne and Xiang, Donglai and Joo, Hanbyul and Grauman, Kristen},
  journal={CVPR},
  year={2020}
}
```
