GMFSS_Fortuna

The All-In-One GMFSS: Dedicated for Anime Video Frame Interpolation


2023-06-25: Thanks to AnimeRun and related work, we have updated one of the union fine-tuned models.


  • The optimised training process is more stable.
  • We offer several models, usable for inference or as pre-trained starting points for fine-tuning.

Installation

Our code is developed with PyTorch 1.13.1, CUDA 11.8, and Python 3.9. Earlier versions of PyTorch should also work.

To install, run the following commands:

git clone https://github.com/98mxr/GMFSS_Fortuna.git
cd GMFSS_Fortuna
pip install -r requirements.txt

If you are using CUDA 12.x, change cupy-cuda11x to cupy-cuda12x in requirements.txt. Do not install cupy-cuda11x and cupy-cuda12x at the same time!
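If you prefer scripting the swap, a minimal Python sketch (the helper name `switch_cupy_wheel` is illustrative, not part of this repo) simply rewrites the pinned wheel name in place:

```python
from pathlib import Path

def switch_cupy_wheel(path: str = "requirements.txt") -> None:
    """Rewrite the cupy pin from the CUDA 11 wheel to the CUDA 12 wheel."""
    p = Path(path)
    p.write_text(p.read_text().replace("cupy-cuda11x", "cupy-cuda12x"))
```

Run `pip install -r requirements.txt` again after the edit.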

Model Zoo

If you want to validate our results, you will need the GMFSS model or the union model.

Alternatively, try the new union model fine-tuned on anime optical-flow data.

If you want to train your own model, you can use our pre-trained model to skip the baseline training stage.

Run Video Frame Interpolation

  • Unzip the downloaded models and place the train_log folder in the root directory. Then run one of the following commands.
  1. Using gmfss mode
python3 inference_video.py --img=demo/ --scale=1.0 --multi=2
  2. Using union mode
python3 inference_video.py --img=demo/ --scale=1.0 --multi=2 --union
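The --multi flag sets the frame-rate multiplier. Assuming the usual interpolation convention of synthesizing multi - 1 intermediate frames between each consecutive input pair (check inference_video.py for the exact behaviour), the output length of an image sequence can be estimated with a sketch like:

```python
def interpolated_frame_count(n_input: int, multi: int) -> int:
    """Estimate output length: (multi - 1) new frames are inserted
    between each of the (n_input - 1) consecutive input pairs."""
    return (n_input - 1) * multi + 1

print(interpolated_frame_count(100, 2))  # 199 frames after 2x interpolation
```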

Train

  • Unzip the pre-trained models and place the train_log folder, as well as the dataset, in the root directory. To train on other datasets, modify model/dataset.py accordingly. Run one of the following commands.
  1. Train gmfss with GAN optimization
python3 train_pg.py
  2. Train gmfss_union with GAN optimization
python3 train_upg.py
  3. Train pre-trained models
python3 train_nb.py

Acknowledgment

This project is supported by the SVFI Development Team.