- 📢 2024.04.24 Release the Windows Unity demo (GPU) trained in 100STYLE dataset.
- 📢 2024.06.23 Release the training code in PyTorch.
- 📢 2024.07.05 Release the inference code in Unity.
- 📢 2024.07.05 Release the evaluation code and data.
Our project is developed with Unity and features a real-time character control demo that generates high-quality, diverse character animations in response to user-supplied control signals. With our character controller, you can make your character move in any style you want, all through a single unified model.
A carefully designed diffusion model powers the demo and runs efficiently on consumer-level GPUs and Apple Silicon MacBooks. For more information, please visit our project homepage, or download the runnable program from the releases page.
WASD: Move the character.
F: Switch between forward mode and orientation-fixed mode.
QE: Adjust the orientation in orientation-fixed mode.
J: Next style
L: Previous style
Left Shift: Run
Our project contains two main modules: network training with PyTorch, and a real-time demo with Unity. Both modules are open-sourced and available in this repository.
To train a character animation system, you first need a rigged character and its corresponding motion data. In our project, we provide an example with Y-Bot from Mixamo, which uses the standard Mixamo skeleton configuration. We also retargeted the 100STYLE dataset to the Mixamo skeleton using ARP-Batch-Retargeting. Therefore, you can download any other character from Mixamo and drive it with our trained model.
For customized characters and motion data, please refer to our upcoming documentation on the retargeting and rigging process.
Diffusion Network Training [PyTorch]
All the training code and documentation can be found in the corresponding subfolder of our repository.
A full training run on the entire 100STYLE dataset takes approximately one day, although acceptable checkpoints can usually be obtained after a few hours (4+ hours) of training. Once training is complete, the saved checkpoints need to be converted to the ONNX format so they can be imported into Unity for inference. For more details, please check the subfolder.
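As a reference, here is a minimal, self-contained sketch of the ONNX conversion step using `torch.onnx.export`. The stand-in model class, checkpoint path, and tensor shapes are placeholders for illustration only; please use the export utilities and input signature provided in the training subfolder.

```python
# Minimal sketch of converting a trained checkpoint to ONNX.
# The real denoiser class, checkpoint layout, and input signature come from
# the training code; everything below is a placeholder for illustration.
import torch
import torch.nn as nn


class StandInDenoiser(nn.Module):
    """Placeholder for the diffusion denoiser defined in the training code."""

    def __init__(self, motion_dim=256, cond_dim=32):
        super().__init__()
        self.net = nn.Linear(motion_dim + cond_dim + 1, motion_dim)

    def forward(self, noisy_motion, timestep, condition):
        t = timestep.float().view(1, 1, 1).expand(
            noisy_motion.shape[0], noisy_motion.shape[1], 1)
        return self.net(torch.cat([noisy_motion, condition, t], dim=-1))


model = StandInDenoiser()
# model.load_state_dict(torch.load("save/last.pt", map_location="cpu"))  # real checkpoint here
model.eval()

# Dummy inputs matching the (assumed) inference signature.
noisy_motion = torch.randn(1, 60, 256)   # batch x frames x motion features
timestep = torch.tensor([10])            # diffusion step
condition = torch.randn(1, 60, 32)       # control / style conditioning

torch.onnx.export(
    model,
    (noisy_motion, timestep, condition),
    "camdm.onnx",
    input_names=["noisy_motion", "timestep", "condition"],
    output_names=["denoised_motion"],
    opset_version=17,
)
```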
Unity Inference [Unity]
Once you have obtained the ONNX file and its corresponding model configuration JSON files, you can import them into our Unity project and run your own demo. For a step-by-step tutorial, please visit our YouTube channel: tutorial.
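Before importing the ONNX file into Unity, it can be useful to verify that the exported graph loads and runs. The sketch below uses `onnxruntime` with zero-filled dummy inputs; it only checks that the file is well-formed, not that the animations are correct, and the file name `camdm.onnx` is an assumption.

```python
# Quick sanity check of an exported ONNX file before importing it into Unity.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("camdm.onnx", providers=["CPUExecutionProvider"])

inputs = {}
for inp in session.get_inputs():
    # Replace any dynamic dimensions with 1 for a smoke test.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    dtype = np.int64 if "int" in inp.type else np.float32
    inputs[inp.name] = np.zeros(shape, dtype=dtype)

outputs = session.run(None, inputs)
for out, value in zip(session.get_outputs(), outputs):
    print(out.name, value.shape)
```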
In the original paper, we used a 3060 GPU for inference and achieved over 60 frames per second with the default settings. For more details about the parameters, please refer to the paper.
We recorded the motion results of all compared methods using the same control presets. You can access the data and metrics in the evaluation folder.
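For reference, here is a hypothetical sketch of how recorded results could be compared against a shared control preset. The file names and the frames x 3 root-position layout are assumptions and do not reflect the repository's actual data format or metrics; please see the evaluation folder for the real scripts.

```python
# Hypothetical comparison of recorded root trajectories against a shared
# control preset; file names and array layout (frames x 3) are assumptions.
import numpy as np

preset = np.load("evaluation/control_preset.npy")   # assumed (T, 3) target root positions
ours = np.load("evaluation/ours_root.npy")          # assumed (T, 3) recorded root positions
baseline = np.load("evaluation/baseline_root.npy")

def mean_tracking_error(pred, target):
    """Average Euclidean distance between recorded and target root positions."""
    return float(np.linalg.norm(pred - target, axis=-1).mean())

print("ours:    ", mean_tracking_error(ours, preset))
print("baseline:", mean_tracking_error(baseline, preset))
```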
- Release Unity .exe demo. (2024.04.24)
- Release the training code in PyTorch. (2024.06.23)
- Release the inference code in Unity. (2024.07.05)
- Release the evaluation code. (2024.07.05)
- Release the inference code to support any character control. (TBA)
This project is inspired by the following works, and we appreciate their contributions. If you find our project helpful, please consider citing:
@inproceedings{camdm,
title={Taming Diffusion Probabilistic Models for Character Control},
author={Rui Chen and Mingyi Shi and Shaoli Huang and Ping Tan and Taku Komura and Xuelin Chen},
booktitle={SIGGRAPH},
year={2024}
}
The Unity code is released under the GPL-3.0 license; the rest of the source code is released under the Apache License, Version 2.0.