This repo provides the inference Gradio demo for Hybrid (Trajectory + Landmark) Control of MOFA-Video.
Clone the repository:

```
git clone https://github.com/MyNiuuu/MOFA-Video.git
cd ./MOFA-Video
```
The demo has been tested with CUDA 11.7. Set up the conda environment and install the dependencies:

```
cd ./MOFA-Video-Hybrid
conda create -n mofa python==3.10
conda activate mofa
pip install -r requirements.txt
pip install opencv-python-headless
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
```
IMPORTANT: The package versions in `requirements.txt` should be followed strictly, since other versions may cause errors.
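As an optional sanity check (not part of the original instructions), you can confirm that PyTorch was installed with CUDA support before moving on; this assumes the `mofa` environment is active:

```
# Optional: verify that PyTorch sees the GPU inside the `mofa` environment.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```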
- Download the checkpoint of CMP from here and put it into `./MOFA-Video-Hybrid/models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints` (see the sketch after this list).
- Download the `ckpts` folder from the huggingface repo, which contains the necessary pretrained checkpoints, and put it under `./MOFA-Video-Hybrid`. You may use `git lfs` to download the entire `ckpts` folder (see the sketch after this list):
  - Download `git lfs` from https://git-lfs.github.com. It is commonly used for cloning repositories with large model checkpoints on HuggingFace. NOTE: If you encounter the error `git: 'lfs' is not a git command` on Linux, you can try this solution, which has worked well in my case.
  - Execute `git clone https://huggingface.co/MyNiuuu/MOFA-Video-Hybrid` to download the complete HuggingFace repository, which includes the `ckpts` folder.
  - Copy or move the `ckpts` folder to the GitHub repository.

Finally, the checkpoints should be organized as shown in `./MOFA-Video-Hybrid/ckpt_tree.md`.
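A minimal shell sketch of the download steps above, run from the repository root. The `git-lfs` package name, the clone destination `MOFA-Video-Hybrid-hf`, and the CMP checkpoint filename are assumptions; adjust them to your system and to the file you actually downloaded:

```
# Install and initialize Git LFS (package name may differ by distribution).
sudo apt install git-lfs
git lfs install

# Clone the HuggingFace repo into a separate folder so it does not clash with
# the existing ./MOFA-Video-Hybrid directory of this GitHub repository.
git clone https://huggingface.co/MyNiuuu/MOFA-Video-Hybrid MOFA-Video-Hybrid-hf

# Move the pretrained checkpoints into place.
mv MOFA-Video-Hybrid-hf/ckpts ./MOFA-Video-Hybrid/

# Place the CMP checkpoint (filename is hypothetical; use the file you downloaded).
mkdir -p ./MOFA-Video-Hybrid/models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints
mv <cmp_checkpoint>.pth.tar ./MOFA-Video-Hybrid/models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/
```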
To launch the audio-driven demo:

```
cd ./MOFA-Video-Hybrid
python run_gradio_audio_driven.py
```

🪄🪄🪄 The Gradio interface is displayed below. Please refer to the instructions on the Gradio interface during the inference process!
To launch the video-driven demo:

```
cd ./MOFA-Video-Hybrid
python run_gradio_video_driven.py
```

🪄🪄🪄 The Gradio interface is displayed below. Please refer to the instructions on the Gradio interface during the inference process!
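If you need to expose the demo on a specific host or port (for example, on a remote server), Gradio reads the `GRADIO_SERVER_NAME` and `GRADIO_SERVER_PORT` environment variables; this sketch assumes the launch script does not override them with explicit `server_name`/`server_port` arguments:

```
# Assumption: the script relies on Gradio's default launch settings,
# so these environment variables take effect.
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 python run_gradio_video_driven.py
```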
We use SadTalker and AniPortrait to generate the landmarks in this demo. We sincerely appreciate their release of the code and checkpoints.