Zhenghao Zhang*, Junchao Liao*, Menghao Li, Zuozhuo Dai, Bingxue Qiu, Siyu Zhu, Long Qin, Weizhi Wang
* equal contribution
This is the official repository for the paper "Tora: Trajectory-oriented Diffusion Transformer for Video Generation".
Recent advancements in Diffusion Transformer (DiT) have demonstrated remarkable proficiency in producing high-quality video content. Nonetheless, the potential of transformer-based diffusion models for effectively generating videos with controllable motion remains an area of limited exploration. This paper introduces Tora, the first trajectory-oriented DiT framework that concurrently integrates textual, visual, and trajectory conditions for video generation. Specifically, Tora consists of a Trajectory Extractor (TE), a Spatial-Temporal DiT, and a Motion-guidance Fuser (MGF). The TE encodes arbitrary trajectories into hierarchical spacetime motion patches with a 3D video compression network. The MGF integrates the motion patches into the DiT blocks to generate consistent videos that follow the trajectories. Our design aligns seamlessly with DiT's scalability, allowing precise control of video content's dynamics across diverse durations, aspect ratios, and resolutions. Extensive experiments demonstrate Tora's excellence in achieving high motion fidelity, while also meticulously simulating the movement of the physical world.
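To make the MGF idea concrete, below is a minimal, hypothetical sketch of how motion patches could modulate the hidden states of a DiT block through adaptive normalization. All class, argument, and tensor names are illustrative assumptions and are not taken from the released code.

```python
# Illustrative only: adaptive-normalization fusion of motion patches into a DiT
# block's hidden states. Shapes and names are assumptions, not the released code.
import torch
import torch.nn as nn

class MotionGuidanceFuser(nn.Module):
    def __init__(self, hidden_dim: int, motion_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        # Predict a per-channel scale and shift from the motion patches.
        self.to_scale_shift = nn.Sequential(nn.SiLU(), nn.Linear(motion_dim, 2 * hidden_dim))

    def forward(self, hidden_states: torch.Tensor, motion_patches: torch.Tensor) -> torch.Tensor:
        # hidden_states:  (batch, tokens, hidden_dim) from a DiT block
        # motion_patches: (batch, tokens, motion_dim) from the Trajectory Extractor
        scale, shift = self.to_scale_shift(motion_patches).chunk(2, dim=-1)
        # Add the motion-conditioned modulation back onto the hidden states.
        return hidden_states + self.norm(hidden_states) * (1 + scale) + shift
```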
- 2024/12/13: SageAttention2 and model compilation are supported in the diffusers version. Tested on an A10, these approaches speed up every inference step after the first by approximately 52%.
- 2024/12/09: Diffusers version of Tora and the corresponding model weights are released. Inference VRAM requirements are reduced to around 5 GiB. Please refer to this for details.
- 2024/11/25: Text-to-Video training code released.
- 2024/10/31: Model weights uploaded to HuggingFace. We also provide an English demo on ModelScope.
- 2024/10/23: Our ModelScope Demo is launched. Welcome to try it out! We also upload the model weights to ModelScope.
- 2024/10/21: Thanks to @kijai for supporting Tora in ComfyUI! Link
- 2024/10/15: We released our inference code and model weights. Please note that this is a CogVideoX version of Tora, built on the CogVideoX-5B model. This version of Tora is intended for academic research purposes only. Due to our commercial plans, we will not be open-sourcing the complete version of Tora at this time.
- 2024/08/27: We released our v2 paper, including the appendix.
- 2024/07/31: We submitted our paper to arXiv and released our project page.
- Showcases
- TODO List
- Diffusers version
- Installation
- Model Weights
- Inference
- Gradio Demo
- Training
- Troubleshooting
- Acknowledgements
- Our previous work
- Citation
Tora_CogVideoX_demo1.mp4
Tora_CogVideoX_demo2.mp4
Tora_CogVideoX_demo3.mp4
All videos are available at this Link
- Release our inference code and model weights
- Provide a ModelScope Demo
- Release our training code
- Release diffusers version and optimize the GPU memory usage
- Release complete version of Tora
Please refer to the diffusers version for details.
Please make sure your Python version is between 3.10 and 3.12, inclusive.
# Clone this repository.
git clone https://github.com/alibaba/Tora.git
cd Tora
# Install PyTorch (we use PyTorch 2.4.0) and torchvision following the official instructions: https://pytorch.org/get-started/previous-versions/. For example:
conda create -n tora python==3.10
conda activate tora
conda install pytorch==2.4.0 torchvision==0.19.0 pytorch-cuda=12.1 -c pytorch -c nvidia
# Install requirements
cd modules/SwissArmyTransformer
pip install -e .
cd ../../sat
pip install -r requirements.txt
cd ..
Tora
└── sat
    └── ckpts
        ├── t5-v1_1-xxl
        │   ├── model-00001-of-00002.safetensors
        │   └── ...
        ├── vae
        │   └── 3d-vae.pt
        ├── tora
        │   └── t2v
        │       └── mp_rank_00_model_states.pt
        └── CogVideoX-5b-sat  # for training stage 1
            └── mp_rank_00_model_states.pt
Note: Downloading the `tora` weights requires following the CogVideoX License. You can choose one of the following options: HuggingFace, ModelScope, or native links. After downloading the model weights, you can put them in the `Tora/sat/ckpts` folder.
# This can be faster
pip install "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download Le0jc/Tora --local-dir ckpts
or
# use git
git lfs install
git clone https://huggingface.co/Le0jc/Tora
- SDK
from modelscope import snapshot_download
model_dir = snapshot_download('xiaoche/Tora')
- Git
git clone https://www.modelscope.cn/xiaoche/Tora.git
- Download the VAE and T5 model following CogVideo:
- Tora t2v model weights: Link. Downloading this weight requires following the CogVideoX License.
Inference requires around 30 GiB of GPU memory, tested on an NVIDIA A100.
cd sat
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True torchrun --standalone --nproc_per_node=$N_GPU sample_video.py --base configs/tora/model/cogvideox_5b_tora.yaml configs/tora/inference_sparse.yaml --load ckpts/tora/t2v --output-dir samples --point_path trajs/coaster.txt --input-file assets/text/t2v/examples.txt
You can change `--input-file` and `--point_path` to your own prompt and trajectory point files. Please note that the trajectory is drawn on a 256x256 canvas. Replace `$N_GPU` with the number of GPUs you want to use.
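For reference, here is a hypothetical helper for generating a trajectory file on the 256x256 canvas. The "one x,y pair per line" format below is an assumption, not a documented specification, so compare its output against the provided examples (e.g. `trajs/coaster.txt`) before relying on it.

```python
# Hypothetical trajectory generator. Assumes one "x,y" coordinate pair per line,
# drawn on the 256x256 canvas mentioned above; verify against trajs/*.txt.
import math

def write_circle_trajectory(path: str, num_points: int = 49,
                            center: tuple = (128.0, 128.0), radius: float = 80.0) -> None:
    with open(path, "w") as f:
        for i in range(num_points):
            angle = 2 * math.pi * i / num_points
            x = center[0] + radius * math.cos(angle)
            y = center[1] + radius * math.sin(angle)
            f.write(f"{x:.1f},{y:.1f}\n")

write_circle_trajectory("trajs/circle.txt")
```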
For text prompts, we highly recommend using GPT-4 to enhance the details. Simple prompts may negatively impact both visual quality and motion control effectiveness.
You can refer to the following resources for guidance:
Usage:
cd sat
python app.py --load ckpts/tora/t2v
Following this guide https://github.com/THUDM/CogVideo/blob/main/sat/README.md#preparing-the-dataset, structure the datasets as follows:
.
├── labels
│   ├── 1.txt
│   ├── 2.txt
│   └── ...
└── videos
    ├── 1.mp4
    ├── 2.mp4
    └── ...
Training data examples are in `sat/training_examples`.
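If it helps, the following sketch lays out a dataset in the labels/videos structure shown above, with one caption `.txt` per video matched by file stem. The function name, the input list, and the paths are placeholders, not part of the official tooling.

```python
# Hypothetical convenience script for building the labels/videos layout above.
import shutil
from pathlib import Path

def build_dataset(pairs, out_dir: str) -> None:
    """pairs: iterable of (caption_text, path_to_video) tuples."""
    root = Path(out_dir)
    (root / "labels").mkdir(parents=True, exist_ok=True)
    (root / "videos").mkdir(parents=True, exist_ok=True)
    for idx, (caption, video_path) in enumerate(pairs, start=1):
        # Caption and video share the same numeric stem, e.g. 1.txt <-> 1.mp4.
        (root / "labels" / f"{idx}.txt").write_text(caption)
        shutil.copy(video_path, root / "videos" / f"{idx}.mp4")

build_dataset([("A roller coaster looping at sunset.", "raw/coaster.mp4")], "my_dataset")
```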
Training requires around 60 GiB of GPU memory, tested on an NVIDIA A100.
Replace `$N_GPU` with the number of GPUs you want to use.
- Stage 1
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True torchrun --standalone --nproc_per_node=$N_GPU train_video.py --base configs/tora/model/cogvideox_5b_tora.yaml configs/tora/train_dense.yaml --experiment-name "t2v-stage1"
- Stage 2
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True torchrun --standalone --nproc_per_node=$N_GPU train_video.py --base configs/tora/model/cogvideox_5b_tora.yaml configs/tora/train_sparse.yaml --experiment-name "t2v-stage2"
Upgrade the transformers package to 4.44.2 (`pip install transformers==4.44.2`). See this issue.
We would like to express our gratitude to the following open-source projects that have been instrumental in the development of our project:
- CogVideo: An open source video generation framework by THUKEG.
- Open-Sora: An open source video generation framework by HPC-AI Tech.
- MotionCtrl: A video generation model supporting motion control by ARC Lab, Tencent PCG.
- ComfyUI-DragNUWA: An implementation of DragNUWA for ComfyUI.
Special thanks to the contributors of these libraries for their hard work and dedication!
@misc{zhang2024toratrajectoryorienteddiffusiontransformer,
title={Tora: Trajectory-oriented Diffusion Transformer for Video Generation},
author={Zhenghao Zhang and Junchao Liao and Menghao Li and Zuozhuo Dai and Bingxue Qiu and Siyu Zhu and Long Qin and Weizhi Wang},
year={2024},
eprint={2407.21705},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.21705},
}