tl;dr This is GANime, a model capable of generating anime video from the first and last frame. The model is trained on a custom dataset built from the Kimetsu no Yaiba anime. It is composed of two models: a VQ-GAN for image generation, and a GPT2 transformer that generates the video frame by frame.
This project is a Master's thesis by Farid Abdalla, carried out at HES-SO in partnership with Osaka Prefecture University (since renamed Osaka Metropolitan University) in Japan. A PyTorch implementation is available in this repository.
All implementation details are available in this pdf.
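To make the two-stage pipeline described above concrete, the sketch below shows how a VQ-GAN and a GPT2-style transformer could cooperate at generation time: the first and last frames are encoded into discrete codebook indices, the transformer samples the token sequence of each intermediate frame, and the VQ-GAN decoder maps tokens back to pixels. The stub classes, method names, and shapes are illustrative assumptions, not the actual API of this repository.

```python
# Minimal sketch of the two-stage generation loop (VQ-GAN + GPT2-style transformer).
# The stub classes and their interfaces are illustrative assumptions, not the
# repository's actual code.
import torch
import torch.nn as nn


class VQGANStub(nn.Module):
    """Stand-in VQ-GAN: maps frames to discrete codebook indices and back."""

    def __init__(self, n_codes=1024, tokens_per_frame=256, image_size=128):
        super().__init__()
        self.n_codes = n_codes
        self.tokens_per_frame = tokens_per_frame
        self.image_size = image_size

    def encode(self, frames):
        # (B, 3, H, W) -> (B, tokens_per_frame) integer codebook indices
        return torch.randint(0, self.n_codes, (frames.shape[0], self.tokens_per_frame))

    def decode(self, tokens):
        # (B, tokens_per_frame) -> (B, 3, H, W) reconstructed frame
        return torch.zeros(tokens.shape[0], 3, self.image_size, self.image_size)


class TransformerStub(nn.Module):
    """Stand-in GPT2-style model: samples the next frame's token sequence."""

    def __init__(self, n_codes=1024):
        super().__init__()
        self.n_codes = n_codes

    def sample(self, context, n_tokens):
        # A real model would sample token by token from its softmax output.
        return torch.randint(0, self.n_codes, (context.shape[0], n_tokens))


@torch.no_grad()
def generate_video(vqgan, transformer, first_frame, last_frame, n_frames=10):
    """Generate a clip frame by frame, conditioned on the first and last frame."""
    first_tokens = vqgan.encode(first_frame)      # (B, T) codebook indices
    last_tokens = vqgan.encode(last_frame)        # (B, T)

    frames, prev_tokens = [first_frame], first_tokens
    for _ in range(n_frames - 2):
        # Condition on the previously generated frame and the target last frame.
        context = torch.cat([prev_tokens, last_tokens], dim=1)
        next_tokens = transformer.sample(context, n_tokens=first_tokens.shape[1])
        frames.append(vqgan.decode(next_tokens))  # tokens -> pixels
        prev_tokens = next_tokens

    frames.append(last_frame)
    return torch.stack(frames, dim=1)             # (B, n_frames, 3, H, W)


if __name__ == "__main__":
    first = torch.zeros(1, 3, 128, 128)
    last = torch.zeros(1, 3, 128, 128)
    clip = generate_video(VQGANStub(), TransformerStub(), first, last, n_frames=5)
    print(clip.shape)  # torch.Size([1, 5, 3, 128, 128])
```

During training, the transformer would instead be fit with teacher forcing on token sequences extracted from real clips; the stubs above only illustrate the inference loop.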
For each pair of rows, the first row is the generated result and the second row is the ground truth.
Some results are quite surprising. When the first and last frames are identical, the model is still capable of generating some animation: for instance, some characters seem to be breathing even though the ground truth is still. When something appears suddenly (upper-right video), the model makes it appear with a fading effect.
The lower-left picture with Zenitsu is interesting: the VQ-GAN seems to have learned that a generated eye must contain a pupil, so producing a plain white eye did not make sense to the model.
For the clock (bottom-middle), the generated video moves the clock hands even though the first and last pictures are identical.
| Dataset | Link |
|---|---|
| Kimetsu no Yaiba | link |
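Below is a hedged sketch of how a downloaded clip could be turned into the (first frame, last frame, target frames) triple the model works with. The file name is a placeholder and the repository's actual data pipeline may differ; only torchvision's `read_video` call is a real API.

```python
# Hedged sketch: turning a downloaded clip into conditioning frames.
# "kny_clip_0001.mp4" is a placeholder file name, not part of the dataset release.
import torch
from torchvision.io import read_video

video, _, _ = read_video("kny_clip_0001.mp4", pts_unit="sec")  # (T, H, W, C), uint8
video = video.permute(0, 3, 1, 2).float() / 255.0              # (T, C, H, W) in [0, 1]

first, last = video[0], video[-1]   # the two conditioning frames
target = video                      # the frames the model learns to reproduce
print(first.shape, last.shape, target.shape)
```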
| Dataset | Model | Link |
|---|---|---|
| MovingMNIST | moving_mnist_image.yaml | link |
| Kimetsu no Yaiba | kny_image_full_vgg19.yaml | link |
| Dataset | Model | Link |
|---|---|---|
| Kimetsu no Yaiba | kny_video_gpt2_medium.yaml | link |
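The sketch below shows one way the config files listed above might be paired with downloaded checkpoint weights. Only the `.yaml` names come from the tables; the `configs/` and `checkpoints/` paths and the `.ckpt` file names are assumptions made for illustration.

```python
# Hedged sketch: loading the listed configs next to downloaded checkpoints.
# Directory layout and .ckpt names are illustrative assumptions; only the
# .yaml names appear in the tables above.
import yaml
import torch

with open("configs/kny_image_full_vgg19.yaml") as f:      # VQ-GAN config
    vqgan_cfg = yaml.safe_load(f)
with open("configs/kny_video_gpt2_medium.yaml") as f:     # GPT2 config
    gpt2_cfg = yaml.safe_load(f)

# Placeholder paths standing in for the files behind the "link" entries.
vqgan_state = torch.load("checkpoints/kny_vqgan.ckpt", map_location="cpu")
gpt2_state = torch.load("checkpoints/kny_gpt2_medium.ckpt", map_location="cpu")

print(sorted(vqgan_cfg), sorted(gpt2_cfg))
```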
Instructions on how to train and generate will be added later.