MIMAFace: Face Animation via Motion-Identity Modulated Appearance Feature Learning

Installation

pip install -r requirements.txt

Download Models

You can download both the code and the model weights of MIMAFace directly from the Hugging Face Hub:

# Download the whole project containing all code and weight files 
from huggingface_hub import snapshot_download
snapshot_download(repo_id="MIMAFace/MIMAFace", local_dir="./MIMAFace")

To run the demo, you also need to download the pre-trained Stable Diffusion (SD) models.
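As a sketch (the exact base models are not listed here; check the repository's inference scripts for the model IDs they expect), the SD components can be fetched the same way as above, assuming the SD v1.5 base model and the ft-MSE VAE:

# Hypothetical example: the repo IDs below are assumptions, not confirmed by this README
from huggingface_hub import snapshot_download
snapshot_download(repo_id="stable-diffusion-v1-5/stable-diffusion-v1-5",
                  local_dir="./pretrained/stable-diffusion-v1-5")
snapshot_download(repo_id="stabilityai/sd-vae-ft-mse",
                  local_dir="./pretrained/sd-vae-ft-mse")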

Inference

Reenact a source image with a single target image:

python infer_image.py \
    --source_path ./examples/source/bengio.jpg \
    --target_path ./examples/target/0000025.jpg \
    --output_path ./examples/result

Reenact a source image with a target video:

python infer_video.py \
    --source_path ./examples/source/bengio.jpg \
    --target_path ./examples/target/id10291#TMCTm7GxiDE#000181#000465.mp4 \
    --output_dir ./examples/result

Training

Preparing Dataset

We convert the training datasets (VoxCeleb2/VFHQ) into TFRecord files and place them under datasets/voxceleb2.

The meta file voxceleb2_tfrecord_train_list.txt contains entries like:

/path/to/tfrecord             offset       image_id 
tfrecord/voxceleb2_2.tfrecord 102797351015 train_id04128#5wFKqF1MVos#004810#005018
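As a minimal parsing sketch (parse_meta_line and the header handling are illustrative, not part of the repository):

# Hypothetical helper: each meta line has three whitespace-separated fields
def parse_meta_line(line):
    tfrecord_path, offset, image_id = line.split()
    return tfrecord_path, int(offset), image_id

with open("voxceleb2_tfrecord_train_list.txt") as f:
    for line in f:
        if line.startswith("/path/to"):
            continue  # skip the header row if present
        tfrecord_path, offset, image_id = parse_meta_line(line)
        # offset is the byte position of this sample's record inside the TFRecord file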

We extract the face landmarks and parsing masks (the masks are not used) of the dataset images in advance and save them to the TFRecord files, so each TFRecord item contains image, mask, and pts:

image, mask, pts = get_tfrecord_item(tffile, offset)
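get_tfrecord_item is provided by the repository's data code. As a rough sketch of what offset-based access means, a single record can be read directly at the byte offset from the meta file, following the standard TFRecord layout (uint64 length, uint32 length CRC, payload bytes, uint32 payload CRC); the feature names inside the payload are defined by the repo's actual schema and are only assumed here:

import struct
import tensorflow as tf

def read_record_at_offset(tfrecord_path, offset):
    # Standard TFRecord record layout: uint64 length, uint32 masked CRC of the
    # length, `length` payload bytes, uint32 masked CRC of the payload.
    # CRC verification is skipped in this sketch.
    with open(tfrecord_path, "rb") as f:
        f.seek(offset)
        length = struct.unpack("<Q", f.read(8))[0]
        f.read(4)  # skip the length CRC
        return f.read(length)

# Offsets come from the meta file shown above.
payload = read_record_at_offset("tfrecord/voxceleb2_2.tfrecord", 102797351015)
example = tf.train.Example.FromString(payload)  # image/mask/pts features per the repo's schema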

Start Training

# Stage 1: Train on images (4 * A100, ~1 day)
sh start_train_image.sh

# Stage 2: Train on videos (4 * A100, ~1 day)
sh start_train_video.sh
