We present Tex4D, a zero-shot approach that combines the inherent 3D geometry knowledge of mesh sequences with the expressiveness of video diffusion models. Given an untextured mesh sequence and a text prompt, our method generates multi-view and temporally consistent 4D textures.
- Technical Report
- Release inference code
- Release data preprocessing code
Please first run the following commands to set up the environment and dependencies:
```bash
git clone https://github.com/ZqlwMatt/Tex4D.git
cd Tex4D
conda create -n tex4d python=3.8
conda activate tex4d
pip install -r requirements.txt
```
Then install PyTorch3D from the prebuilt wheels (check your CUDA version by running `pytorch3d_install.py` and substitute it into the URL below):
```bash
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu117_pyt200/download.html
```
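For reference, here is a minimal sketch of what such a version-checking helper might do; this is a guess, not the repository's actual `pytorch3d_install.py`. It derives the Python/CUDA/PyTorch tags that name the wheel directory and prints the matching install command.

```python
# Hypothetical sketch (NOT the repository's actual pytorch3d_install.py):
# derive the py/cu/pyt tags that name the PyTorch3D wheel directory and
# print the matching install command.
import sys
import torch

cuda = torch.version.cuda  # e.g. "11.7"; None on CPU-only builds
assert cuda is not None, "PyTorch3D prebuilt wheels require a CUDA build of PyTorch"

py_tag = f"py3{sys.version_info.minor}"                             # e.g. py38
cu_tag = "cu" + cuda.replace(".", "")                               # e.g. cu117
pyt_tag = "pyt" + torch.__version__.split("+")[0].replace(".", "")  # e.g. pyt200

print(
    "pip install --no-index --no-cache-dir pytorch3d -f "
    "https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/"
    f"{py_tag}_{cu_tag}_{pyt_tag}/download.html"
)
```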
Generate the conditioning data for the video diffusion model from the provided mesh sequences:
```bash
python visualize.py --render --data_folder "anim/boo" --pose_dir "pose_3" --load_from_data
```
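The conditioning data are geometric renderings of the mesh sequence from fixed viewpoints. As a rough, hypothetical illustration of this step (not the repository's actual `visualize.py`), the PyTorch3D sketch below renders a depth map for one frame; the frame path and camera parameters are placeholders.

```python
# Hypothetical sketch of the conditioning step: render a depth map of one
# mesh frame with PyTorch3D.
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRasterizer,
    RasterizationSettings,
    look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder path: one frame of the untextured mesh sequence.
mesh = load_objs_as_meshes(["anim/boo/frame_000.obj"], device=device)

# Placeholder camera pose; the --pose_dir flag above presumably selects a
# predefined trajectory of such poses.
R, T = look_at_view_transform(dist=2.5, elev=10.0, azim=30.0)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)

rasterizer = MeshRasterizer(
    cameras=cameras,
    raster_settings=RasterizationSettings(image_size=512),
)
fragments = rasterizer(mesh)

# Per-pixel depth; PyTorch3D writes -1 into background pixels.
depth = fragments.zbuf[0, ..., 0]
```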
Then run the texture generation pipeline with the scene config:

```bash
python run.py --config data/boo/config.yaml
```
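The config file collects the per-scene settings. The keys below are hypothetical and only illustrate the kind of options such a file typically carries (data paths, text prompt, diffusion hyperparameters); consult `data/boo/config.yaml` for the actual schema.

```yaml
# Hypothetical config sketch; the real keys live in data/boo/config.yaml.
data_folder: data/boo          # untextured mesh sequence + conditioning renders
prompt: "a cute white ghost"   # text prompt driving the texture
num_inference_steps: 50        # diffusion denoising steps
guidance_scale: 7.5            # classifier-free guidance strength
seed: 42
output_dir: outputs/boo
```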
For more results, please see our project webpage.
Gallery videos: gallery_boo2_2.mp4, gallery_boo1_2.mp4, gallery_snowman2.mp4
```bibtex
@article{bao2024tex4d,
  title={Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models},
  author={Jingzhi Bao and Xueting Li and Ming-Hsuan Yang},
  journal={arXiv preprint arXiv:2410.10821},
  year={2024}
}
```