VGen is an open-source video synthesis codebase developed by the Tongyi Lab of Alibaba Group, featuring state-of-the-art video generative models. This repository includes implementations of the following methods:
- I2VGen-xl: High-quality image-to-video synthesis via cascaded diffusion models
- VideoComposer: Compositional Video Synthesis with Motion Controllability
- Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation
- A Recipe for Scaling up Text-to-Video Generation with Text-free Videos
- InstructVideo: Instructing Video Diffusion Models with Human Feedback
- DreamVideo: Composing Your Dream Videos with Customized Subject and Motion
- VideoLCM: Video Latent Consistency Model
- Modelscope text-to-video technical report
VGen can produce high-quality videos from input text, images, desired motion, desired subjects, and even provided feedback signals. It also offers a variety of commonly used video generation tools, such as visualization, sampling, training, inference, joint training with images and videos, acceleration, and more.
- [2024.06] We release the code and models of InstructVideo. InstructVideo enables the LoRA fine-tuning and inference in VGen. Feel free to use LoRA fine-tuning for other tasks.
- [2024.04] We release the models of DreamVideo and ModelScopeT2V V1.5!!! ModelScopeT2V V1.5 is further fine-tuned on ModelScopeT2V for 365k iterations with more data.
- [2024.04] We release the code and models of TF-T2V!
- [2024.04] We release the code and models of VideoLCM!
- [2024.03] We release the training and inference code of DreamVideo!
- [2024.03] We release the code and model of HiGen!!
- [2024.01] The Gradio demo of I2VGen-XL is now available on HuggingFace; thanks to our colleagues @Wenmeng Zhou and @AK for the support. Welcome to try it out.
- [2024.01] We now support running the Gradio app locally; thanks to our colleague @Wenmeng Zhou for the support and @AK for the suggestion. Welcome to have a try.
- [2024.01] Thanks @Chenxi for supporting the running of i2vgen-xl on . Feel free to give it a try.
- [2024.01] The Gradio demo of I2VGen-XL is now available on ModelScope; welcome to try it out.
- [2023.12] We have open-sourced the code and models for DreamTalk, which can produce high-quality talking head videos across diverse speaking styles using diffusion models.
- [2023.12] We release TF-T2V that can scale up existing video generation techniques using text-free videos, significantly enhancing the performance of both Modelscope-T2V and VideoComposer at the same time.
- [2023.12] We updated the codebase to support higher versions of xformers (0.0.22) and torch 2.0+, and removed the dependency on flash_attn.
- [2023.12] We release InstructVideo, which can accept human feedback signals to improve VLDMs.
- [2023.12] We release DreamTalk, a diffusion-based expressive talking head generation method.
- [2023.12] We release VideoLCM, a high-efficiency video generation method.
- [2023.12] We release the code and models of I2VGen-XL and ModelScope T2V.
- [2023.12] We release the T2V method HiGen and the T2V customization method DreamVideo.
- [2023.12] We write an introduction document for VGen and compare I2VGen-XL with SVD.
- [2023.11] We release a high-quality I2VGen-XL model, please refer to the Webpage
- Release the technical papers and webpage of I2VGen-XL
- Release the code and pretrained models that can generate 1280x720 videos
- Release the code and models of DreamTalk that can generate expressive talking heads
- Release the code and pretrained models of HumanDiff
- Release models optimized specifically for the human body and faces
- An updated version that can fully maintain the ID and capture large, accurate motions simultaneously
- Release other methods and the corresponding models
The main features of VGen are as follows:
- Expandability, allowing for easy management of your own experiments.
- Completeness, encompassing all common components for video generation.
- Excellent performance, featuring powerful pre-trained models in multiple tasks.
conda create -n vgen python=3.8
conda activate vgen
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
You also need to ensure that ffmpeg is installed on your system. If it is not, you can install it with the following command:
sudo apt-get update && sudo apt-get install -y ffmpeg libsm6 libxext6
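As an optional sanity check (our suggestion, not part of the official setup), you can confirm that the CUDA build of PyTorch and the ffmpeg binary are both visible before moving on:

```python
# Optional sanity check: verify the CUDA-enabled torch build and that ffmpeg
# is reachable on PATH (this snippet is a convenience, not part of VGen itself).
import shutil

import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("ffmpeg:", shutil.which("ffmpeg") or "NOT FOUND - install it with the command above")
```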
We have provided a demo dataset that includes images and videos, along with their lists, in data. Please note that the demo images used here are for testing purposes and were not included in the training.
git clone https://github.com/ali-vilab/VGen.git
cd VGen
Running distributed training is as simple as executing the following command:
python train_net.py --cfg configs/t2v_train.yaml
In the t2v_train.yaml configuration file, you can specify the data, adjust the video-to-image ratio using frame_lens, validate your ideas with different diffusion settings, and so on (a small config-inspection sketch follows the list below).
- Before training, you can download any of our open-source models for initialization. Our codebase supports custom initialization and grad_scale settings, all of which are included in the Pretrain item in the yaml file.
- During training, you can view the saved models and intermediate inference results in the workspace/experiments/t2v_train directory.
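If you want to double-check how a run is configured before launching it, the following minimal sketch (assuming PyYAML is installed and that frame_lens and Pretrain are top-level keys, as described above) prints the relevant settings:

```python
# Minimal config-inspection sketch; key names frame_lens and Pretrain come from
# the text above, and treating them as top-level yaml keys is an assumption.
import yaml

with open("configs/t2v_train.yaml") as f:
    cfg = yaml.safe_load(f)

print("video-to-image ratio (frame_lens):", cfg.get("frame_lens"))
print("initialization and grad_scale settings (Pretrain):", cfg.get("Pretrain"))
```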
After the training is completed, you can perform inference on the model using the following command.
python inference.py --cfg configs/t2v_infer.yaml
Then you can find the videos you generated in the workspace/experiments/test_img_01 directory. For specific configurations such as data, models, and seed, please refer to the t2v_infer.yaml file.
If you want to directly load our previously open-sourced Modelscope T2V model, please refer to this link.
(i) Download model and test data:
!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('damo/I2VGen-XL', cache_dir='models/', revision='v1.0.0')
Or you can download it from HuggingFace (https://huggingface.co/damo-vilab/i2vgen-xl):
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/damo-vilab/i2vgen-xl
(ii) Run the following command:
python inference.py --cfg configs/i2vgen_xl_infer.yaml
or you can run:
python inference.py --cfg configs/i2vgen_xl_infer.yaml test_list_path data/test_list_for_i2vgen.txt test_model models/i2vgen_xl_00854500.pth
The test_list_path represents the input image path and its corresponding caption. Please refer to the specific format and suggestions in the demo file data/test_list_for_i2vgen.txt. test_model is the path for loading the model. In a few minutes, you can retrieve the high-definition video you created from the workspace/experiments/test_list_for_i2vgen directory. At present, we find that the current model performs inadequately on anime images and images with a black background due to a lack of relevant training data. We are continuously working to optimize it.
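To see the expected test-list format without guessing it, you can simply print the first few entries of the demo file (a convenience snippet, not part of the official tooling):

```python
# Preview the demo test list to see the expected "image path + caption" format.
with open("data/test_list_for_i2vgen.txt", encoding="utf-8") as f:
    for line in f.readlines()[:5]:
        print(line.rstrip())
```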
(iii) Run the gradio app locally:
python gradio_app.py
(iv) Run the model on ModelScope and HuggingFace:
Because the GIF format compresses video quality, please click 'HERE' below to view the original video.
| Input Image | Generated Video |
| --- | --- |
| (input image) | Click HERE to view the generated video. |
| (input image) | Click HERE to view the generated video. |
| (input image) | Click HERE to view the generated video. |
| (input image) | Click HERE to view the generated video. |
(ii) Run the following command:
python inference.py --cfg configs/i2vgen_xl_train.yaml
In a few minutes, you can retrieve the high-definition video you created from the workspace/experiments/test_img_01 directory. At present, we find that the current model performs inadequately on anime images and images with a black background due to a lack of relevant training data. We are continuously working to optimize it.
(i) Download model:
!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/HiGen', cache_dir='models/')
Then you might need the following command to move the checkpoints to the "models/" directory:
mv ./models/iic/HiGen/* ./models/
(ii) Run the following command for text-to-video generation:
python inference.py --cfg configs/higen_infer.yaml
In a few minutes, you can retrieve the videos you created from the workspace/experiments/text_list_for_t2v_share directory.
Then you can execute the following command to perform super-resolution on the generated videos:
python inference.py --cfg configs/sr600_infer.yaml
Finally, you can retrieve the high-definition video from the workspace/experiments/text_list_for_t2v_share directory.
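If you prefer to run both HiGen stages back to back, a small wrapper that simply shells out to the same two commands shown above works as well:

```python
# Convenience wrapper: run HiGen text-to-video generation, then super-resolution,
# using the same config files as the commands above.
import subprocess

for cfg in ["configs/higen_infer.yaml", "configs/sr600_infer.yaml"]:
    subprocess.run(["python", "inference.py", "--cfg", cfg], check=True)
```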
Because the GIF format compresses video quality, please click 'HERE' below to view the original video.
| Generated Video | Generated Video |
| --- | --- |
| Click HERE to view the generated video. | Click HERE to view the generated video. |
Our DreamVideo uses ModelScopeT2V V1.5 as the base video diffusion model. ModelScopeT2V V1.5 is further fine-tuned on ModelScopeT2V for 365k iterations with more data.
!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/dreamvideo-t2v', cache_dir='models/')
Then you might need the following command to move the checkpoints to the "models/" directory:
mv ./models/iic/dreamvideo-t2v/* ./models/
Or you can download the checkpoint of ModelScopeT2V V1.5 and adapter weights of DreamVideo from this link.
(i) Subject Learning
Step 1: learn a textual identity using Textual Inversion.
python train_net.py --cfg configs/dreamvideo/subjectLearning/dog2_subjectLearning_step1.yaml
Step 2: train an identity adapter by incorporating the learned textual identity.
python train_net.py --cfg configs/dreamvideo/subjectLearning/dog2_subjectLearning_step2.yaml
Tips:
- Generally, step 1 takes 1500 to 3000 training steps, and step 2 takes 500 to 1000 training steps. For certain subjects (like cats), excessive training may generate unnatural videos; using a text embedding with fewer training steps or reducing the training steps of step 2 may help.
- For some subjects (like dogs), setting use_mask_diffusion to True may achieve better results. Make sure to put the binary masks of the subject into the folder data/images/custom/YOUR_SUBJECT/masks; you can use SAM to obtain these masks (see the sketch after this list).
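Below is a minimal sketch of obtaining such a binary mask with SAM (segment-anything). The checkpoint path, input image name, and single click-point prompt are illustrative assumptions; only the output folder data/images/custom/YOUR_SUBJECT/masks comes from the tip above.

```python
# Hypothetical SAM-based mask extraction; the checkpoint path, image name, and
# click-point prompt are assumptions. Only the masks folder comes from the text.
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("data/images/custom/YOUR_SUBJECT/00.jpg").convert("RGB"))
predictor.set_image(image)

# One foreground click roughly at the image center (illustrative coordinates).
masks, _, _ = predictor.predict(
    point_coords=np.array([[image.shape[1] // 2, image.shape[0] // 2]]),
    point_labels=np.array([1]),
    multimask_output=False,
)
Image.fromarray((masks[0] * 255).astype(np.uint8)).save(
    "data/images/custom/YOUR_SUBJECT/masks/00.png"
)
```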
(ii) Motion Learning
Train a motion adapter on the given videos.
python train_net.py --cfg configs/dreamvideo/motionLearning/carTurn_motionLearning.yaml
You can customize your own configuration files for subject/motion learning.
Tips:
- Generally, motion learning takes 500 to 2000 training steps.
- Try setting p_image_zero from 0 to 0.5 to adjust the effect of appearance guidance during training.
- For single-video motion customization, try increasing the training steps or the learning rate to better align the motion pattern.
(i) Subject Customization
python inference.py --cfg configs/dreamvideo/infer/subject_dog2.yaml
(ii) Motion Customization
python inference.py --cfg configs/dreamvideo/infer/motion_carTurn.yaml
For inference with appearance guidance, make sure to add images of foreground objects (e.g., any image of a bear) to the folder data/images/motionReferenceImgs and modify your test file.
Tips:
- Try setting appearance_guide_strength_cond and appearance_guide_strength_uncond from 0 to 1 to adjust the effect of appearance guidance during inference.
- We do not use DDIM Inversion by default. However, for single-video motion customization, you can try setting inverse_noise_strength to 0~0.5 to better align with the training video. For multi-video motion customization, we recommend setting inverse_noise_strength to 0.
(iii) Joint Customization
python inference.py --cfg configs/dreamvideo/infer/joint_dog2_carTurn.yaml
Tips:
- Try changing identity_adapter_index and motion_adapter_index for better results. Typically, increasing identity_adapter_index improves identity preservation, while increasing motion_adapter_index enhances motion alignment; balance the two for optimal results (a hedged parameter-sweep sketch follows this list).
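If you want to explore this trade-off systematically, one option is to sweep a few index pairs and run inference for each. The sketch below assumes both keys sit at the top level of the inference yaml and that PyYAML is installed; the output config names are made up for illustration.

```python
# Hedged sweep over the two adapter indices discussed above; the assumption is
# that identity_adapter_index and motion_adapter_index are top-level yaml keys.
import subprocess

import yaml

base_cfg = "configs/dreamvideo/infer/joint_dog2_carTurn.yaml"
for identity_idx, motion_idx in [(1, 1), (2, 1), (2, 2)]:
    with open(base_cfg) as f:
        cfg = yaml.safe_load(f)
    cfg["identity_adapter_index"] = identity_idx
    cfg["motion_adapter_index"] = motion_idx
    out_cfg = f"configs/dreamvideo/infer/joint_sweep_i{identity_idx}_m{motion_idx}.yaml"
    with open(out_cfg, "w") as f:
        yaml.safe_dump(cfg, f)
    subprocess.run(["python", "inference.py", "--cfg", out_cfg], check=True)
```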
We provide some examples for inference. Before you start, make sure you download the models.
(i) Subject Customization
python inference.py --cfg configs/dreamvideo/infer/examples/subject_dog2.yaml
python inference.py --cfg configs/dreamvideo/infer/examples/subject_wolf_plushie.yaml
| Subject | Generated Video | Subject | Generated Video |
| --- | --- | --- | --- |
| dog | "a * eating pizza" (seed: 2767) | wolf plushie | "a * running in the forest" (seed: 2339) |
(ii) Motion Customization
python inference.py --cfg configs/dreamvideo/infer/examples/motion_carTurn.yaml
python inference.py --cfg configs/dreamvideo/infer/examples/motion_playingGuitar.yaml
| Motion | Generated Video | Motion | Generated Video |
| --- | --- | --- | --- |
| "a car running on the road" | "a lion running on the road" (seed: 8888) | "a person is playing guitar" | "a monkey is playing guitar on Mars" (seed: 8888) |
(iii) Joint Customization
python inference.py --cfg configs/dreamvideo/infer/examples/joint_dog2_carTurn.yaml
python inference.py --cfg configs/dreamvideo/infer/examples/joint_dog2_playingGuitar.yaml
python inference.py --cfg configs/dreamvideo/infer/examples/joint_wolf_plushie_carTurn.yaml
python inference.py --cfg configs/dreamvideo/infer/examples/joint_wolf_plushie_playingGuitar.yaml
(i) Download model:
!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/tf-t2v', cache_dir='models/')
Then you might need the following command to move the checkpoints to the "models/" directory:
mv ./models/iic/tf-t2v/* ./models/
(ii) We provide a config file for generating 16-frame video with 448x256 resolution. The command is as follows:
python inference.py --cfg configs/tft2v_t2v_infer.yaml
(If you encounter environment problems, we also provide the TF-T2V environment configuration "tft2v_environment.yaml" for your reference.)
In a few minutes, you can retrieve the videos you created from the workspace/experiments/text_list_for_tft2v directory.
Then you can execute the following command to perform super-resolution on the generated videos:
python inference.py --cfg configs/tft2v_16frames_sr600_infer.yaml
Finally, you can retrieve the high-definition video from the workspace/experiments/text_list_for_tft2v directory.
(Note that the super-resolution model only supports 32-frame input; 16-frame videos cannot be used directly, so we construct a pseudo 32-frame video by copying frames.)
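For illustration only, the "pseudo 32-frame" trick amounts to repeating each frame so that a 16-frame clip matches the super-resolution model's expected input length; the tensor layout below is an assumption made for this sketch.

```python
# Illustration of the frame-copying trick described above; the layout
# [frames, channels, height, width] is an assumption for this sketch.
import torch

clip_16 = torch.randn(16, 3, 256, 448)         # stand-in for a 16-frame clip
clip_32 = clip_16.repeat_interleave(2, dim=0)  # duplicate every frame -> 32 frames
print(clip_32.shape)                           # torch.Size([32, 3, 256, 448])
```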
Because the GIF format compresses video quality, please click 'HERE' below to view the original video.
| Generated Video | Generated Video |
| --- | --- |
| Click HERE to view the generated video. | Click HERE to view the generated video. |
(iii) Additionally, you can run the following command for text-to-video generation (32 frames):
python inference.py --cfg configs/tft2v_t2v_32frames_infer.yaml
In a few minutes, you can retrieve the videos you created from the workspace/experiments/text_list_for_tft2v_32frame directory.
Then you can execute the following command to perform super-resolution on the generated videos:
python inference.py --cfg configs/tft2v_32frames_sr600_infer.yaml
Finally, you can retrieve the high-definition video from the workspace/experiments/text_list_for_tft2v_32frame directory.
(Note that the super-resolution model only supports 32-frame input; 16-frame videos cannot be used.)
Because the GIF format compresses video quality, please click 'HERE' below to view the original video.
| Generated Video | Generated Video |
| --- | --- |
| Click HERE to view the generated video. | Click HERE to view the generated video. |
(iv) Run the following command for compositional video generation like videocomposer (32 frames):
python inference.py --cfg configs/tft2v_vcomposer_32frames_infer.yaml
In a few minutes, you can retrieve the videos you created from the workspace/experiments/vid_list_vcomposer_32frame directory.
Then you can execute the following command to perform super-resolution on the generated videos:
python inference.py --cfg configs/tft2v_vcomposer_32frames_sr600_infer.yaml
Finally, you can retrieve the high-definition video from the workspace/experiments/vid_list_vcomposer_32frame directory.
Because the GIF format compresses video quality, please click 'HERE' below to view the original video.
| Generated Video | Generated Video |
| --- | --- |
| Click HERE to view the generated video. | Click HERE to view the generated video. |
(v) We also provide a config file for generating 16-frame video with 448x256 resolution under the compositional video synthesis setting. The command is as follows:
python inference.py --cfg configs/tft2v_vcomposer_infer.yaml
You can also generate a 16-frame video with 896x512 resolution within one model by running:
python inference.py --cfg configs/tft2v_vcomposer_896x512_infer.yaml
Note that the super-resolution model only supports 32-frame input; 16-frame videos cannot be used.
(i) Download models as in TF-T2V (if you have already downloaded them in TF-T2V, skip this step):
!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/tf-t2v', cache_dir='models/')
Then you might need the following command to move the checkpoints to the "models/" directory:
mv ./models/iic/tf-t2v/* ./models/
(ii) Run the following command for text-to-video generation (16 frames with 448x256 resolution):
python inference.py --cfg configs/videolcm_t2v_infer.yaml
To generate high-resolution videos (1280x720 resolution), you can run the following command:
python inference.py --cfg configs/videolcm_t2v_16frames_sr600_infer.yaml
Because the GIF format compresses video quality, please click 'HERE' below to view the original video.
| Generated Video | Generated Video |
| --- | --- |
| Click HERE to view the generated video. | Click HERE to view the generated video. |
(iii) Run the following command for compositional video generation (16 frames with 448x256 resolution):
python inference.py --cfg configs/videolcm_vcomposer_infer.yaml
Feel free to reach out (hj.yuan@zju.edu.cn) if you have questions.
The training of InstructVideo requires video-text pairs to save computational cost during reward fine-tuning. In the paper, we utilize a small set of videos in WebVid to fine-tune our base model. The file list is shown under the folder:
data/instructvideo/webvid_simple_animals_2_selected_20_train_file_list/00000.txt
You should try filtering the videos from your WebVid dataset to compose the training data. Alternatively, you can use your own video-text pairs. (I tested InstructVideo on WebVid data and some proprietary data; both worked.)
Concerning the environment configuration, you should follow the instructions for VGen installation.
!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/InstructVideo', cache_dir='models/')
You need to move the checkpoints to the "models/" directory:
mv ./models/iic/InstructVideo/* ./models/
Note that models/model_scope_v1-4_0600000.pth is the pre-trained base model used in the paper. The fine-tuned model is placed under the folder models/instructvideo-finetuned.
You can access the provided files on the InstructVideo ModelScope page.
You can leverage the provided fine-tuned checkpoints to generate videos by running the command:
bash configs/instructvideo/eval_generate_videos.sh
This command uses the yaml files under configs/instructvideo/eval, which contain caption file paths for generating videos of in-domain animals, new animals, and non-animals. Feel free to switch among them or replace them with your own captions.
Although we fine-tuned using 20-step DDIM, you can still use 50-step DDIM generation.
You can perform InstructVideo reward fine-tuning by running the command:
bash configs/instructvideo/train.sh
Since reward fine-tuning can lead to over-optimization, I strongly recommend regularly checking the generation performance on some evaluation captions (such as those indicated in configs/instructvideo/eval).
In preparation!!
Our codebase essentially supports all the commonly used components in video generation. You can manage your experiments flexibly by adding the corresponding registration classes, including ENGINE, MODEL, DATASETS, EMBEDDER, AUTO_ENCODER, VISUAL, DIFFUSION, and PRETRAIN, and remain compatible with all our open-source algorithms according to your own needs. If you have any questions, feel free to give us your feedback at any time.
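For readers unfamiliar with this pattern, here is a generic, self-contained illustration of how such registries typically work; the Registry class, the MODEL instance, and the decorator below are hypothetical stand-ins, not VGen's actual API.

```python
# Generic registry-pattern illustration; Registry, MODEL, and the decorator are
# hypothetical stand-ins, not VGen's actual implementation.
class Registry:
    def __init__(self, name):
        self.name = name
        self._classes = {}

    def register_class(self, cls):
        self._classes[cls.__name__] = cls
        return cls

    def build(self, name, **kwargs):
        return self._classes[name](**kwargs)


MODEL = Registry("MODEL")


@MODEL.register_class
class MyVideoDiffusionModel:
    def __init__(self, num_frames=16):
        self.num_frames = num_frames


model = MODEL.build("MyVideoDiffusionModel", num_frames=32)
print(type(model).__name__, model.num_frames)
```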
If this repo is useful to you, please cite our corresponding technical paper.
@article{wang2023videocomposer,
title={Videocomposer: Compositional Video Synthesis with Motion Controllability},
author={Wang, Xiang and Yuan, Hangjie and Zhang, Shiwei and Chen, Dayou and Wang, Jiuniu and Zhang, Yingya and Shen, Yujun and Zhao, Deli and Zhou, Jingren},
journal={NeurIPS},
volume={36},
year={2023}
}
@article{2023i2vgenxl,
title={I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models},
author={Zhang, Shiwei and Wang, Jiayu and Zhang, Yingya and Zhao, Kang and Yuan, Hangjie and Qing, Zhiwu and Wang, Xiang and Zhao, Deli and Zhou, Jingren},
journal={arXiv preprint arXiv:2311.04145},
year={2023}
}
@article{wang2023modelscope,
title={Modelscope text-to-video technical report},
author={Wang, Jiuniu and Yuan, Hangjie and Chen, Dayou and Zhang, Yingya and Wang, Xiang and Zhang, Shiwei},
journal={arXiv preprint arXiv:2308.06571},
year={2023}
}
@inproceedings{dreamvideo,
title={DreamVideo: Composing Your Dream Videos with Customized Subject and Motion},
author={Wei, Yujie and Zhang, Shiwei and Qing, Zhiwu and Yuan, Hangjie and Liu, Zhiheng and Liu, Yu and Zhang, Yingya and Zhou, Jingren and Shan, Hongming},
booktitle={CVPR},
year={2024}
}
@inproceedings{higen,
title={Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation},
author={Qing, Zhiwu and Zhang, Shiwei and Wang, Jiayu and Wang, Xiang and Wei, Yujie and Zhang, Yingya and Gao, Changxin and Sang, Nong },
booktitle={CVPR},
year={2024}
}
@article{wang2023videolcm,
title={VideoLCM: Video Latent Consistency Model},
author={Wang, Xiang and Zhang, Shiwei and Zhang, Han and Liu, Yu and Zhang, Yingya and Gao, Changxin and Sang, Nong },
journal={arXiv preprint arXiv:2312.09109},
year={2023}
}
@article{ma2023dreamtalk,
title={DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models},
author={Ma, Yifeng and Zhang, Shiwei and Wang, Jiayu and Wang, Xiang and Zhang, Yingya and Deng, Zhidong},
journal={arXiv preprint arXiv:2312.09767},
year={2023}
}
@inproceedings{InstructVideo,
title={InstructVideo: Instructing Video Diffusion Models with Human Feedback},
author={Yuan, Hangjie and Zhang, Shiwei and Wang, Xiang and Wei, Yujie and Feng, Tao and Pan, Yining and Zhang, Yingya and Liu, Ziwei and Albanie, Samuel and Ni, Dong},
booktitle={CVPR},
year={2024}
}
@inproceedings{TFT2V,
title={A Recipe for Scaling up Text-to-Video Generation with Text-free Videos},
author={Wang, Xiang and Zhang, Shiwei and Yuan, Hangjie and Qing, Zhiwu and Gong, Biao and Zhang, Yingya and Shen, Yujun and Gao, Changxin and Sang, Nong},
booktitle={CVPR},
year={2024}
}
We would like to express our gratitude for the contributions of several previous works to the development of VGen. These include, but are not limited to, Composer, ModelScopeT2V, Stable Diffusion, OpenCLIP, WebVid-10M, LAION-400M, Pidinet, and MiDaS. We are committed to building upon these foundations in a way that respects their original contributions.
This open-source model is trained using the WebVid-10M and LAION-400M datasets and is intended for RESEARCH/NON-COMMERCIAL USE ONLY.