This repository is currently under construction
Requirements
- Python 3.8.0
- PyTorch 1.11.0
- OpenSim 4.3+
Python package
Clone this repo and run the following:
conda env create -f environment_setup.yml
Activate the environment using:
conda activate d3ke
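To confirm the environment works, a quick check like the following can be run (a minimal sketch, assuming the versions listed under Requirements; CUDA availability depends on your machine):

# Minimal sanity check for the d3ke environment.
import sys
import torch

print("Python:", sys.version.split()[0])      # expected 3.8.x
print("PyTorch:", torch.__version__)          # expected 1.11.0
print("CUDA available:", torch.cuda.is_available())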
OpenSim 4.3
- (On Windows) Install the Python API:
  - In <installation_folder>/OpenSim 4.x/sdk/Python, run:
    python setup_win_python38.py
    python -m pip install .
- (On other operating systems) Follow the instructions here to set up the OpenSim scripting environment.
- Copy all *.obj files from resources/opensim/geometry to <installation_folder>/OpenSim 4.x/Geometry.

Note: Scripts that need to import OpenSim have only been verified on Windows.
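To check that the bindings are available, a quick import test can be run (a minimal sketch):

# Verify that the OpenSim Python bindings were installed correctly.
import opensim
print("OpenSim version:", opensim.GetVersion())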
Datasets
- BMLmovi
  - Register to get access to the downloads section.
  - Download the .avi videos of the PG1 and PG2 cameras from the F round (F_PGX_Subject_X_L.avi).
  - Download Camera Parameters.tar.
  - Download the v3d files (F_Subjects_1_45.tar).
- AMASS
  - Download the SMPL+H body data of BMLmovi.
- SMPL+H Models
  - Register to get access to the downloads section.
  - Download the extended SMPL+H model (used in the AMASS project).
- DMPLs
  - Register to get access to the downloads section.
  - Download DMPLs for AMASS.
- PASCAL Visual Object Classes (ONLY NECESSARY FOR TRAINING)
  - Download the training/validation data.
- Unpack the downloaded SMPL and DMPL archives into ms_model_estimation/resources.
- Unpack the downloaded AMASS data into the top-level folder resources/amass.
- Unpack the F_Subjects_1_45 folder and unpack the content of all its subfolders into resources/V3D/F.
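Before generating the ground truth, it can help to confirm that everything is in place. The sketch below simply checks that the folders named in this README exist; it is not part of the repository's scripts.

# Sanity check for the expected data layout (folder names taken from this README).
from pathlib import Path

expected = [
    Path("ms_model_estimation/resources"),  # unpacked SMPL and DMPL archives
    Path("resources/amass"),                # AMASS body data
    Path("resources/V3D/F"),                # contents of F_Subjects_1_45
    Path("resources/opensim/geometry"),     # OpenSim geometry (*.obj) files
]
for folder in expected:
    print("ok     " if folder.is_dir() else "MISSING", folder)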
Run the generate_opensim_gt script:
python generate_opensim_gt.py
This process might take several hours!
Once the dataset is generated, the scaled OpenSim model and motion files can be found in resources/opensim/BMLmovi/BMLmovi.
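As an illustration, the generated files can be inspected with the OpenSim Python API. The file names below are placeholders, not the actual output names; use the files found under resources/opensim/BMLmovi/BMLmovi.

# Illustrative only: load a scaled model and a motion file with the OpenSim API.
import opensim

model = opensim.Model("resources/opensim/BMLmovi/BMLmovi/scaled_model.osim")  # placeholder name
model.initSystem()
print("Model:", model.getName())

motion = opensim.Storage("resources/opensim/BMLmovi/BMLmovi/motion.mot")      # placeholder name
print("Motion frames:", motion.getSize())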
After the ground truth has been generated, the dataset needs to be prepared.
Run the prepare_dataset script and provide the location where the BMLMovi videos are stored:
python prepare_dataset.py --BMLMoviDir path/to/bmlmovi/videos
NOTE: to generate data for training, also provide the path to the Pascal VOC dataset:
python prepare_dataset.py --BMLMoviDir path/to/bmlmovi/videos --PascalDir path/to/pascal_voc/data
This process might again take several hours!
Run the run_inference script:
python run_inference.py
This will use D3KE to run predictions on the subset of BMLMovi used for testing.