
AniDoc: Animation Creation Made Easier


Yihao Meng<sup>1,2</sup>, Hao Ouyang<sup>2</sup>, Hanlin Wang<sup>3,2</sup>, Qiuyu Wang<sup>2</sup>, Wen Wang<sup>4,2</sup>, Ka Leong Cheng<sup>1,2</sup>, Zhiheng Liu<sup>5</sup>, Yujun Shen<sup>2</sup>, Huamin Qu<sup>†,2</sup>

<sup>1</sup>HKUST <sup>2</sup>Ant Group <sup>3</sup>NJU <sup>4</sup>ZJU <sup>5</sup>HKU <sup>†</sup>corresponding author

AniDoc colorizes a sequence of sketches based on a character design reference with high fidelity, even when the sketches significantly differ in pose and scale.

We strongly recommend visiting our demo page.

Showcases:


Flexible Usage:

Same Reference with Varying Sketches

Satoru Gojo from Jujutsu Kaisen

Same Sketch with Different References

Anya Forger from Spy x Family

TODO List

  • Release the paper and demo page. Visit https://yihao-meng.github.io/AniDoc_demo/
  • Release the inference code.
  • Build Gradio Demo
  • Release the training code.
  • Release the sparse sketch setting interpolation code.

Requirements:

Training was conducted on 8 A100 GPUs (80 GB VRAM each); inference was tested on an RTX 5000 (32 GB VRAM) and requires about 14 GB of VRAM in our tests.

Setup

git clone https://github.com/yihao-meng/AniDoc.git
cd AniDoc

Environment

All our tests were conducted on Linux, and we recommend running the code on Linux as well. To set up the environment, run:

conda create -n anidoc python=3.8 -y
conda activate anidoc

bash install.sh

Checkpoints

  1. Download the pre-trained Stable Video Diffusion (SVD) checkpoint from here and put the whole folder under pretrained_weights, so it looks like ./pretrained_weights/stable-video-diffusion-img2vid-xt.
  2. Download the checkpoint for our UNet and ControlNet from here and put the whole folder at ./pretrained_weights/anidoc.
  3. Download the CoTracker checkpoint from here and put it at ./pretrained_weights/cotracker2.pth.
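
After these steps, the pretrained_weights directory should look like this (layout inferred from the paths above):

```
pretrained_weights/
├── stable-video-diffusion-img2vid-xt/
├── anidoc/
└── cotracker2.pth
```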

Generate Your Animation!

To colorize the target lineart sequence with a specific character design, you can run the following command:

bash scripts_infer/anidoc_inference.sh

We provide some test cases in the data_test folder. You can also try the model with your own data: change the lineart sequence and the corresponding character design in the script anidoc_inference.sh, where --control_image specifies the lineart sequence and --ref_image specifies the character design.

You should pass a color video as --control_image; our code will extract a sketch from each frame to use as the control signal.

Currently our model expects a 14-frame video as input, so to colorize your own lineart sequence you should first preprocess it into 14 frames. You can use process_video_to_14frame.py for this; it selects 14 frames uniformly from the input video.

However, in our tests the model works well with more than 14 frames in most cases (we tried up to 72). To test the model on an arbitrary number of input frames, slightly modify the inference code by replacing the hard-coded 14 and args.num_frames with the frame count of your input video.
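
As a rough illustration of the uniform selection the preprocessing step performs (a minimal sketch, not the actual process_video_to_14frame.py implementation; the function name is our own):

```python
def select_uniform_frames(frames, num_frames=14):
    """Return `num_frames` frames sampled evenly from `frames`.

    Hypothetical helper mirroring what process_video_to_14frame.py
    does per the README: pick 14 evenly spaced frames, always
    including the first and last frame of the sequence.
    """
    n = len(frames)
    if n < num_frames:
        raise ValueError(f"need at least {num_frames} frames, got {n}")
    # Evenly spaced indices from the first frame to the last frame.
    indices = [round(i * (n - 1) / (num_frames - 1)) for i in range(num_frames)]
    return [frames[i] for i in indices]
```

For a 28-frame input this keeps frames 0 and 27 and samples the rest roughly every other frame; for a 14-frame input it is the identity.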

Hugging face demo

fffiloni built a quick Gradio demo for AniDoc, available here. Thanks for his contribution!

Because our model expects a 14-frame video as input, loading a control video with more than 14 frames will raise an error. For now, you can use process_video_to_14frame.py to preprocess your video; it selects 14 frames uniformly. We will update the Gradio demo to automate this soon.

Citation:

If AniDoc proves useful in your research, please cite it:

@article{meng2024anidoc,
      title={AniDoc: Animation Creation Made Easier},
      author={Yihao Meng and Hao Ouyang and Hanlin Wang and Qiuyu Wang and Wen Wang and Ka Leong Cheng and Zhiheng Liu and Yujun Shen and Huamin Qu},
      journal={arXiv preprint arXiv:2412.14173},
      year={2024}
}