EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views (NeurIPS 2024)

PyTorch implementation of EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views.

📖 To Do List

    • Release the training, evaluation, and inference code.
    • Release the pretrained checkpoint.
    • Release the collected dataset.

📋 Table of Contents

  1. ❗ Overview
  2. 💡 Requirements
  3. 📖 Dataset
  4. ✏️ Usage
    1. Environment
    2. Train
    3. Evaluation
    4. Inference
  5. ✉️ Statement
  6. 🔍 Citation

❗ Overview

EgoChoir seeks to estimate 3D human contact and object affordance from egocentric videos.


💡 Requirements

(1) Download smpl_neutral_geodesic_dist.npy and put it under the folder data/; it is used to compute the geo metric. We also provide smplx_neutral_geodesic_dist.npy, which can be downloaded here.
(2) Download the pre-trained HRNet and put the .pth file under the folder tools/models/hrnet/config/hrnet/.
(3) Download the pre-trained EgoChoir checkpoint from Baidu Pan, key: grru, or Google Drive, and put the checkpoint file under the folder runs/. Note: the motion encoder weights are already integrated into the checkpoint, so you do not need to pre-train it.
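
For reference, here is a minimal sketch of how the geodesic distance matrix might be used to score predicted contact, assuming the .npy stores a dense per-vertex geodesic distance matrix over the SMPL neutral mesh; the function name and thresholding below are illustrative, not the repository's actual metric code:

import numpy as np

# Assumed: dense (V x V) matrix of pairwise geodesic distances on the SMPL neutral mesh.
geo_dist = np.load("data/smpl_neutral_geodesic_dist.npy")

def geodesic_contact_error(pred_contact, gt_contact, threshold=0.5):
    # Mean geodesic distance from each predicted contact vertex to the
    # nearest ground-truth contact vertex (illustrative sketch only).
    pred_idx = np.where(pred_contact > threshold)[0]
    gt_idx = np.where(gt_contact > threshold)[0]
    if len(pred_idx) == 0 or len(gt_idx) == 0:
        return 0.0
    return geo_dist[np.ix_(pred_idx, gt_idx)].min(axis=1).mean()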

📖 Dataset

The released dataset includes the following data:
(1) Video clips from GIMO and EgoExo-4D.
(2) 3D human contact sequences.
(3) 3D objects with affordance annotations.
(4) Head motion sequences.

Download the dataset (around 110 GB) from Baidu Pan, key: 2zjt. We will upload the data to other storage services so that it can be downloaded without a Baidu account.
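
The exact directory layout is defined by the dataset release. As a rough, hypothetical illustration of the kinds of arrays involved (the file names below are placeholders, not actual paths in the release):

import numpy as np

# Placeholder file names for illustration only; see the released dataset for the real layout.
contact_seq = np.load("contact_sequence.npy")      # per-frame, per-vertex contact labels on the human mesh
head_motion = np.load("head_motion_sequence.npy")  # per-frame head motion from the egocentric device
affordance = np.load("object_affordance.npy")      # per-point affordance annotation on the 3D object
print(contact_seq.shape, head_motion.shape, affordance.shape)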

✏️ Usage

Environment

First, clone this repository and create a conda environment:

git clone https://github.com/yyvhang/EgoChoir_release.git
cd EgoChoir_release
conda create -n egochoir python=3.10 -y
conda activate egochoir
# install PyTorch 2.0.1
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia

Then, install the other dependencies:

pip install -r requirements.txt
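
Optionally, you can verify the installation with a quick check (the expected version strings assume the commands above were used unchanged):

import torch, torchvision
print(torch.__version__)          # expected: 2.0.1
print(torchvision.__version__)    # expected: 0.15.2
print(torch.cuda.is_available())  # True if the CUDA 11.8 build matches your driver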

Train

To train EgoChoir, run the following command. You can modify the parameters in configs/EgoChoir.yaml.

bash run.sh

Evaluation

Run the following command to evaluate the model.

python eval.py --config config/EgoChoir.yaml --use_gpu True --train_device single

Inference

Run the following command to run inference and obtain results for both the sampled frames and the whole sequence.

python infer.py --config config/EgoChoir.yaml --use_gpu True --train_device single

✉️ Statement

This project is for research purposes only; please contact us for a commercial-use license. For any other questions, please contact yyuhang@mail.ustc.edu.cn.

🔍 Citation

@article{yang2024egochoir,
  title={EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views},
  author={Yang, Yuhang and Zhai, Wei and Wang, Chengfeng and Yu, Chengjun and Cao, Yang and Zha, Zheng-Jun},
  journal={arXiv preprint arXiv:2405.13659},
  year={2024}
}
