
✨ CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement ✨

Official repository of "CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement".

Authors

Chengwen Zhang*, Yun Liu*, Ruofan Xing, Bingda Tang, Li Yi

Data Update Records

  • 2024/8/17: Uploaded V2 of CORE4D-Real, including updated human motions in "CORE4D_Real_human_object_motions_v2"
  • 2024/5/31: Uploaded CORE4D-V1

Data Organization

The data is organized as follows:

|--CORE4D_Real
    |--object_models
        ...
    |--human_object_motions
        ...
    |--allocentric_RGBD_videos
        ...
    |--egocentric_RGB_videos
        ...
    |--human_object_segmentations
        ...
    |--camera_parameters
        ...
    |--action_labels.json
|--CORE4D_Synthetic
    |--<motion sequence name 1>
        |--human_poses.npy
        |--object_mesh.obj
        |--object_poses.npy
    |--<motion sequence name 2>
        |--human_poses.npy
        |--object_mesh.obj
        |--object_poses.npy
    ...
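
Each synthetic sequence is self-contained, so it can be inspected directly with numpy and trimesh. The following is a minimal sketch, not part of the official tooling; the array contents and shapes are assumptions, and docs/file_definitions.md remains the authoritative reference:

import numpy as np
import trimesh

# Hypothetical sequence directory; substitute a real sequence name.
seq_dir = "CORE4D_Synthetic/<motion sequence name 1>"

# Per-frame human poses and object poses (layouts assumed; see docs/file_definitions.md).
human_poses = np.load(f"{seq_dir}/human_poses.npy", allow_pickle=True)
object_poses = np.load(f"{seq_dir}/object_poses.npy", allow_pickle=True)

# The object geometry is a standard OBJ mesh.
object_mesh = trimesh.load(f"{seq_dir}/object_mesh.obj")

print(type(human_poses), type(object_poses), object_mesh)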

File Definitions

Please refer to docs/file_definitions.md for details of our dataset.

Data Visualization

[1] Environment setup

Our code is tested on Ubuntu 20.04 with an NVIDIA GeForce RTX 3090 GPU (driver version 535.129.03, CUDA 12.2).

Please use the following command to set up the environment:

conda create -n core4d python=3.9
conda activate core4d
<install PyTorch >= 1.7.1>
<install PyTorch3D >= 0.6.1>
cd dataset_utils
pip install -r requirements.txt
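
For reference, one combination that satisfies these version constraints is shown below. This is an untested example, not an officially pinned setup; match the CUDA toolkit to your driver and consult the official PyTorch and PyTorch3D installation guides:

conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.7 -c pytorch -c nvidia
conda install pytorch3d -c pytorch3d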

Then, install smplx following https://github.com/vchoutas/smplx, and download the SMPL-X models from https://smpl-x.is.tue.mpg.de.
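
To check that smplx and the body models are set up correctly, you can instantiate a model and run a forward pass. This is a minimal sketch; the model directory below is a placeholder matching the --smplx_model_dir argument used later:

import smplx

# Directory containing the downloaded SMPL-X model files (e.g., SMPLX_NEUTRAL.npz).
model = smplx.create(
    model_path="<SMPL-X model directory>",
    model_type="smplx",
    gender="neutral",
    use_pca=False,
)
output = model()  # forward pass with default (zero) pose and shape parameters
print(output.vertices.shape)  # expected: (1, 10475, 3)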

[2] Visualize human-object motions

cd dataset_utils
python visualize_human_object_motion.py --dataset_root <dataset root directory> --object_model_root <object model root directory> --smplx_model_dir <SMPL-X model directory> --sequence_name <sequence name> --save_path <path to save the visualization result> --device <device for the rendering process>

For example, to visualize the data sequence "20231002/004":

python visualize_human_object_motion.py --dataset_root <dataset root directory> --object_model_root <object model root directory> --smplx_model_dir <SMPL-X model directory> --sequence_name "20231002/004" --save_path "./example.gif" --device "cuda:0"

The script renders the human-object motion and saves the result to the given save path (./example.gif in the command above).

Benchmark Codes

For the implementation of the benchmark "human-object motion forecasting", please refer to ./benchmarks/motion_forecasting/README.md.

For the implementation of the benchmark "interaction synthesis", please refer to ./benchmarks/interaction_synthesis/README.md.

License

This work is licensed under the CC BY 4.0 License.

Email

If you have any questions, please feel free to contact zcwoctopus@gmail.com or yun-liu22@mails.tsinghua.edu.cn.

Citation

If you find our work helpful, please cite:

@article{zhang2024core4d,
  title={CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement},
  author={Zhang, Chengwen and Liu, Yun and Xing, Ruofan and Tang, Bingda and Yi, Li},
  journal={arXiv preprint arXiv:2406.19353},
  year={2024}
}
