The All-In-One GMFSS: Dedicated for Anime Video Frame Interpolation
2023-06-25: Thanks to the related work of AnimeRun, we have updated one of the union fine-tuned models.
- The optimized training process is more stable.
- We offer several models for inference, or as pre-trained weights for fine-tuning.
Our code is developed with PyTorch 1.13.1, CUDA 11.8, and Python 3.9. Lower versions of PyTorch should also work.
To install, run the following commands:
git clone https://github.com/98mxr/GMFSS_Fortuna.git
cd GMFSS_Fortuna
pip install -r requirements.txt
If you are using CUDA 12.x, change cupy-cuda11x to cupy-cuda12x in requirements.txt. Do not install cupy-cuda11x and cupy-cuda12x at the same time!
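The rule above is just a mapping from the CUDA major version to the right CuPy wheel name. A small hypothetical helper (not part of this repo) makes the choice explicit:

```python
def cupy_package_for(cuda_version: str) -> str:
    """Return the CuPy wheel name matching a CUDA version string like '11.8'.

    Only one of these wheels should ever be installed at a time.
    """
    major = int(cuda_version.split(".")[0])
    if major == 11:
        return "cupy-cuda11x"
    if major == 12:
        return "cupy-cuda12x"
    raise ValueError(f"No prebuilt CuPy wheel known for CUDA {cuda_version}")

print(cupy_package_for("11.8"))  # cupy-cuda11x
print(cupy_package_for("12.1"))  # cupy-cuda12x
```

You can check which CUDA version your PyTorch build targets with `python -c "import torch; print(torch.version.cuda)"`.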
If you want to validate the results, you need the GMFSS model or the union model. Alternatively, try the new union model fine-tuned on anime optical flow data.
If you want to train your own model, you can use our pre-trained models to skip the baseline training process.
- Unzip the downloaded models and place the train_log folder in the root directory. Then run one of the following commands.
- Using gmfss mode
python3 inference_video.py --img=demo/ --scale=1.0 --multi=2
- Using union mode
python3 inference_video.py --img=demo/ --scale=1.0 --multi=2 --union
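The --multi flag sets the interpolation factor. Assuming it follows the usual convention for frame-interpolation tools (an assumption, not verified against this repo's code: multi - 1 intermediate frames are inserted between each consecutive frame pair), the output frame count of a clip can be predicted as:

```python
def interpolated_frame_count(n_frames: int, multi: int) -> int:
    """Frame count after inserting (multi - 1) frames between each
    consecutive pair of the original n_frames frames."""
    if n_frames < 2 or multi < 1:
        raise ValueError("need at least 2 frames and multi >= 1")
    return (n_frames - 1) * multi + 1

print(interpolated_frame_count(10, 2))  # 19
```

So --multi=2 roughly doubles the frame rate, --multi=4 roughly quadruples it.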
- Unzip the pre-trained models and place the train_log folder, as well as the dataset, in the root directory. Modifying model/dataset.py is necessary to fit other datasets. Then run one of the following commands.
- Train gmfss with gan optimization
python3 train_pg.py
- Train gmfss_union with gan optimization
python3 train_upg.py
- Train pre-trained models
python3 train_nb.py
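The exact interface model/dataset.py expects is repo-specific, so any adaptation should start from the file itself. Purely as an illustration of the indexing a frame-interpolation dataset typically needs (consecutive triplets where the middle frame is the supervision target), here is a minimal sketch; class and field names are hypothetical, and image decoding, augmentation, and tensor conversion are omitted:

```python
class FrameTripletDataset:
    """Sketch of a triplet dataset: item i is (frame_i, frame_{i+1}, frame_{i+2}),
    with the middle frame serving as the interpolation target.

    A real dataset class would load the images and return tensors; this
    version returns paths only, to show the indexing scheme."""

    def __init__(self, frame_paths):
        # Frames must be in temporal order for triplets to be meaningful.
        self.frame_paths = sorted(frame_paths)

    def __len__(self):
        # Each triplet consumes three consecutive frames.
        return max(0, len(self.frame_paths) - 2)

    def __getitem__(self, i):
        return (
            self.frame_paths[i],      # first input frame
            self.frame_paths[i + 1],  # ground-truth middle frame
            self.frame_paths[i + 2],  # second input frame
        )
```

PyTorch's DataLoader only requires `__len__` and `__getitem__`, so a class shaped like this (returning tensors instead of paths) can drop into a training loop.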
This project is supported by the SVFI Development Team.