- Tokyo, Japan
- https://twitter.com/anamorphobia
ML
Papers, repositories, and other data about anime and manga research. Please let me know if you have information that the list does not include.
TensorFlow implementation of "Improved Transformer for High-Resolution GANs" (NeurIPS 2021).
WACV 2022: Transfer Learning for Pose Estimation of Illustrated Characters
An end-to-end library for editing and rendering motion of 3D characters with deep learning [SIGGRAPH 2020]
Code for "Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose" (Arxiv 2020) and "Predicting Personalized Head Movement From Short Video and Speech Signal" (TMM 2022)
🕹️ Official Implementation of Conditional Motion In-betweening (CMIB) 🏃
Instant neural graphics primitives: lightning fast NeRF and more
Deep 3DMM facial expression parameter extraction
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs.
Official Implementation of "Third Time's the Charm? Image and Video Editing with StyleGAN3" (AIM ECCVW 2022) https://arxiv.org/abs/2201.13433
Demo programs for the Talking Head Anime from a Single Image 2: More Expressive project.
[CVPR'22] ICON: Implicit Clothed humans Obtained from Normals
Denoising of 3D human motion animations by latent space projection.
A PyTorch Library for Accelerating 3D Deep Learning Research
3D mesh stylization driven by a text input in PyTorch
Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian (CVPR 2022)
[CVPR 2022] Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer
iCartoonFace dataset and baseline approaches; the project is supported by iQIYI
PyTorch implementation of "Diagonal Attention and Style-based GAN for Content-Style Disentanglement in Image Generation and Translation" (ICCV 2021)
A motion generation model learned from a single example [SIGGRAPH 2022]
Code for the CVPR 2022 paper "Style-ERD: Responsive and Coherent Online Motion Style Transfer"
Official implementation of dual quaternion transformations as described in the paper "Pose Representations for Deep Skeletal Animation".