The BlendMimic3D-DataExtractor is designed to convert .fbx files to .npy format, specifically for human motion analysis. It was instrumental in generating the BlendMimic3D dataset for our project. For more detailed information about our project, please visit our project webpage.
This repository contains scripts that require Blender to run.
Blender can be downloaded from the official website.
- Blender
- Python
- `3D_extraction.py`: Converts .fbx files to .npz files containing 3D joint data (see the loading sketch after this list).
- `camParams.py`: Extracts camera parameters used in the animations.
- `2D_extraction.py`: Extracts 2D joint data by projecting the 3D joint data onto 2D space using the camera parameters.
- `occlusion.py`: Determines the presence of occlusions in the dataset.
- `fbx2jason/`: Intended for storing .fbx files converted to JSON.
- `regular/`: Default directory for placing sample .fbx files.
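To inspect what the extraction scripts write, the .npz archives can be opened directly with NumPy. The snippet below is only a minimal sketch; the file path and the key names inside the archive are illustrative assumptions and depend on how the scripts were run.

```python
import numpy as np

# Open an archive produced by the extraction scripts (path is illustrative).
data = np.load("S1_3d.npz", allow_pickle=True)

# List the arrays stored in the archive and their shapes.
for key in data.files:
    print(key, data[key].shape)
```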
- Clone or download this repository to your local machine.
- Ensure Blender is installed and added to your system PATH.
Before running the scripts, download a sample .fbx file from your Blender animation (link) and place it into the `regular` folder.
- Move the .fbx file to the `regular` folder.
- Open a command prompt with administrator permissions.
- Change to the directory containing `3D_extraction.py`.
- Execute the script with the following command:

  `blender --background -P 3D_extraction.py -- --joint-id 8 --armature-name Armature --subject S1`
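For context, the core of this step is reading the armature's posed joints frame by frame inside Blender. The snippet below is only a hedged sketch of that idea, not the repository's `3D_extraction.py`: the armature name mirrors the `--armature-name` argument above, and the output file name is illustrative.

```python
# Minimal sketch (not the repository's 3D_extraction.py): read world-space
# joint positions from a posed armature, one sample per frame.
import bpy
import numpy as np

scene = bpy.context.scene
armature = bpy.data.objects["Armature"]  # name mirrors --armature-name above

frames = []
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    # Head of every pose bone, transformed into world coordinates.
    joints = [armature.matrix_world @ pb.head for pb in armature.pose.bones]
    frames.append([tuple(v) for v in joints])

# Output file name is illustrative, not the script's actual naming scheme.
np.savez("S1_3d.npz", positions=np.array(frames))
```

Snippets like this must run under Blender's bundled Python (e.g. via `blender --background -P <script>`), since `bpy` is only available inside Blender.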
- Change to the directory containing `camParams.py`.
- Execute the script with the following command:

  `blender --background animation.blend --python camParams.py -- S1`
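Conceptually, camera parameters in Blender split into intrinsics (focal length, sensor size, render resolution) and extrinsics (the camera's world transform). The sketch below shows one common way to read them with `bpy`; it is not the repository's `camParams.py`, and the horizontal sensor fit and centred principal point are assumptions.

```python
# Minimal sketch (not the repository's camParams.py): pull intrinsics and
# extrinsics from the scene's active camera.
import bpy

scene = bpy.context.scene
cam = scene.camera
render = scene.render

# Intrinsics from the camera's focal length and the render resolution.
f_mm = cam.data.lens                      # focal length in millimetres
sensor_w = cam.data.sensor_width          # sensor width in millimetres
res_x = render.resolution_x * render.resolution_percentage / 100.0
res_y = render.resolution_y * render.resolution_percentage / 100.0
fx = f_mm / sensor_w * res_x              # focal length in pixels (horizontal fit assumed)
cx, cy = res_x / 2.0, res_y / 2.0         # principal point assumed at the image centre

# Extrinsics: world-to-camera transform.
world_to_cam = cam.matrix_world.inverted()

print("fx:", fx, "cx:", cx, "cy:", cy)
print("world_to_cam:\n", world_to_cam)
```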
- Change to the directory containing `2D_extraction.py`.
- Execute the script with the following command:

  `blender --background animation.blend --python 2D_extraction.py -- S1 action_name`
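Projecting 3D joints into the image plane can be done with Blender's built-in helper `bpy_extras.object_utils.world_to_camera_view`, which returns normalised image coordinates. The sketch below illustrates that utility on a single example point; it is not necessarily how `2D_extraction.py` implements the projection.

```python
# Minimal sketch (not the repository's 2D_extraction.py): project a
# world-space 3D point into pixel coordinates.
import bpy
from bpy_extras.object_utils import world_to_camera_view
from mathutils import Vector

scene = bpy.context.scene
cam = scene.camera

point_world = Vector((0.0, 0.0, 1.0))          # example 3D point in world space
co = world_to_camera_view(scene, cam, point_world)

# co.x and co.y are normalised image coordinates (origin at the bottom-left),
# co.z is the depth in front of the camera (negative means behind it).
res_x = scene.render.resolution_x * scene.render.resolution_percentage / 100.0
res_y = scene.render.resolution_y * scene.render.resolution_percentage / 100.0
pixel = (co.x * res_x, (1.0 - co.y) * res_y)   # flip y so the origin is top-left
print("2D projection:", pixel, "depth:", co.z)
```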
- Change to the directory containing `occlusion.py`.
- Execute the script with the following command:

  `blender --background animation.blend --python occlusion.py -- --joint-id 8 --armature-name Armature --subject S1`
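A common way to test whether a joint is visible to the camera is to cast a ray from the camera towards the joint and check whether other geometry is hit first. The sketch below uses Blender's `Scene.ray_cast` on a single example point; it is only a hedged illustration, not the repository's `occlusion.py`, and the tolerance value is an assumption.

```python
# Minimal sketch (not the repository's occlusion.py): test whether a joint
# is occluded by casting a ray from the camera towards it.
import bpy
from mathutils import Vector

scene = bpy.context.scene
cam = scene.camera
depsgraph = bpy.context.evaluated_depsgraph_get()

joint_world = Vector((0.0, 0.0, 1.0))          # example joint position in world space
origin = cam.matrix_world.translation
direction = (joint_world - origin).normalized()

hit, location, normal, index, obj, matrix = scene.ray_cast(depsgraph, origin, direction)

# If the ray hits geometry noticeably closer than the joint, something is
# blocking the camera's view of it (the epsilon tolerance is an assumption).
occluded = hit and (location - origin).length < (joint_world - origin).length - 1e-4
print("occluded:", occluded)
```

In practice such a check also has to handle hits on the subject's own mesh, since skeleton joints sit inside the body surface.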
Contributions are welcome. Please open an issue or submit a pull request with your suggested changes.
If you use our code in your research, please cite our paper:
@inproceedings{lino20243d,
title={3D Human Pose Estimation with Occlusions: Introducing BlendMimic3D Dataset and GCN Refinement},
author={Lino, Filipa and Santiago, Carlos and Marques, Manuel},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={4646--4656},
year={2024}
}
This work was supported by LARSyS funding (DOI: 10.54499/LA/P/0083/2020, 10.54499/UIDP/50009/2020, and 10.54499/UIDB/50009/2020) and 10.54499/2022.07849.CEECIND/CP1713/CT0001, through Fundação para a Ciência e a Tecnologia, and by the SmartRetail project [PRR - C645440011-00000062], through IAPMEI - Agência para a Competitividade e Inovação.