The release of the training code will be delayed due to company review requirements.
- Add training code
- Add evaluation code
- Release checkpoints
- Release processed dataset
- Add dataset-processing code
- Add baseline implementations
Contact ymu3@ualberta.ca with any further questions. A rough codebase for our method can be found on our OpenReview page (Download).
Ours motion_based (supervised):

```
python generate_cmu_l.py --name LVAE_AE_RCE1_KGLE1_121_YL_ML160 --gpu_id 0 --dataset_name bfa --motion_length 160 --ext cmu_NSP_IK --use_style --batch_size 12 --use_ik --niters 1
```

Ours label_based (supervised):

```
python generate_cmu_l.py --name LVAE_AE_RCE1_KGLE1_121_YL_ML160 --gpu_id 0 --dataset_name bfa --motion_length 160 --ext cmu_SP_IK --use_style --batch_size 12 --use_ik --niters 1 --sampling
```

Ours motion_based (unsupervised):

```
python generate_cmu_l.py --name LVAE_AE_RCE0_KGLE2_12E1_ML160 --gpu_id 0 --dataset_name bfa --motion_length 160 --ext cmu_SP_IK --batch_size 12 --use_ik --niters 1
```
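To run all three generation variants in one go, a small wrapper script can be convenient. The sketch below is hypothetical (not part of the released code); the model names and flags are copied verbatim from the commands above, and a `DRY_RUN` switch (an assumption of this sketch, not a flag of `generate_cmu_l.py`) only prints each command so you can inspect it before executing.

```shell
#!/bin/sh
# Hypothetical convenience wrapper around the three generation commands above.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "python generate_cmu_l.py $*"
  else
    python generate_cmu_l.py "$@"
  fi
}

# Ours motion_based (supervised)
run --name LVAE_AE_RCE1_KGLE1_121_YL_ML160 --gpu_id 0 --dataset_name bfa \
    --motion_length 160 --ext cmu_NSP_IK --use_style --batch_size 12 --use_ik --niters 1

# Ours label_based (supervised)
run --name LVAE_AE_RCE1_KGLE1_121_YL_ML160 --gpu_id 0 --dataset_name bfa \
    --motion_length 160 --ext cmu_SP_IK --use_style --batch_size 12 --use_ik --niters 1 --sampling

# Ours motion_based (unsupervised)
run --name LVAE_AE_RCE0_KGLE2_12E1_ML160 --gpu_id 0 --dataset_name bfa \
    --motion_length 160 --ext cmu_SP_IK --batch_size 12 --use_ik --niters 1
```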
The baseline models are implemented in the subfolders ./baseline/unpaired_motion, ./baseline/diverse_stylize, and ./baseline/motion_puzzle, built from their official GitHub implementations. For more details, please refer to the official repositories of Aberman et al., Park et al., and Jang et al.
All training and testing scripts are documented in ./$baseline_path/eval_scripts.txt.
We have borrowed extensively from the following repositories. Many thanks to the authors for sharing their code.