
UV-free Texture Generation with Denoising and Geodesic Heat Diffusions

This repository provides the official implementation of UV3-TeD, a denoising diffusion probabilistic model constrained to operate on the surface of 3D objects and capable of generating textures as coloured point-clouds:

Simone Foti, Stefanos Zafeiriou, Tolga Birdal
Imperial College London


Installation

We suggest creating a mamba environment, but conda can be used as well by simply replacing mamba with conda.

To create the environment, open a terminal and type:

mamba create -n uv3-ted

Then activate the environment with:

mamba activate uv3-ted

Then run the following commands to install the necessary dependencies:

mamba install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
mamba install pyg -c pyg
mamba install pytorch-scatter pytorch-cluster pytorch-sparse -c pyg

pip install diffusers["torch"]
pip install mitsuba

pip install trimesh Pillow rtree
pip install "pyglet<2"
pip install scipy robust_laplacian polyscope pandas point-cloud-utils
pip install func_timeout tb-nightly pyvista

If you want to evaluate the performance of the model, also run the following:

pip install clean-fid lpips
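
To quickly verify the installation, you can run a short sanity check (a minimal sketch; it only confirms that the main packages import and reports whether CUDA is visible):

python -c "import torch, torch_geometric, diffusers, mitsuba; print('CUDA available:', torch.cuda.is_available())"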

Datasets

If the data are not found and automatic download is not implemented, download instructions are printed when the code is launched.

Permissions to download the data may be required. Please refer to the ShapeNet and Amazon Berkeley Objects (ABO) dataset websites for more information.

Prepare Your Configuration File

We provide a configuration file for each experiment. Make sure the paths in the config file are correct. In particular, you may have to change root according to where the data were downloaded, as in the illustrative excerpt below.
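
For illustration only, a config excerpt might look as follows (root is the only key named in this README; the remaining keys and values are hypothetical placeholders, not the actual schema):

# configs/<A_CONFIG_FILE>.yaml (hypothetical excerpt)
root: /path/to/your/datasets    # update according to where the data were downloaded
batch_size: 8                   # can be overridden with --batch_size at test time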

Train and Test

After cloning the repo, open a terminal and go to the project directory. Ensure that your mamba/conda environment is active.

To start the training from the project repo simply run:

python train.py --config=configs/<A_CONFIG_FILE>.yaml --id=<NAME_OF_YOUR_EXPERIMENT>
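
For example, to train with one of the provided configs under an experiment named my_experiment (the config file name below is illustrative):

python train.py --config=configs/abo.yaml --id=my_experiment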

Basic tests will automatically run on the validation set at the end of the training. If you wish to run experiments on the test set or to run other experiments, you can uncomment any function call at the end of test.py. If your model has already been trained or you are using our pretrained model, you can run tests without training:

python test.py --id=<NAME_OF_YOUR_EXPERIMENT>

Note that NAME_OF_YOUR_EXPERIMENT is also the name of the folder containing the pretrained model.

The following parameters can also be used (see the example after this list):

  • --output_path=<PATH>: path to where outputs are going to be stored.
  • --processed_dir_name=<PATH>: relative path to where all the preprocessed files are going to be stored. This path is relative to the folder where your data are stored.
  • --resume: resume the training (available only when launching train.py).
  • --profile: run a few training steps to profile model performance (available only when launching train.py).
  • --batch_size=<n>: overrides the batch size specified in the config file (available only when launching test.py).
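
For example, to re-run the tests with a custom output folder and a smaller batch size (the paths and values below are illustrative):

python test.py --id=<NAME_OF_YOUR_EXPERIMENT> --output_path=./outputs --batch_size=4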

Run LPIPS

LPIPS can be computed by launching a simple script from the LPIPS library. After running the tests, follow these steps:

Clone the LPIPS repo and cd into it:

git clone https://github.com/richzhang/PerceptualSimilarity.git
cd ./PerceptualSimilarity

Then run:

python lpips_2dirs.py -d0 <PATH_TO_A_DIR_CONTAINING_A_SET_OF_RENDERED_SHAPES> -d1 <PATH_TO_ANOTHER_DIR_CONTAINING_A_SET_OF_RENDERED_SHAPES> -o <PATH_TO_OUT_TXT_FILE> --use_gpu
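
For instance, assuming the renders were saved in two sibling folders (the paths below are illustrative):

python lpips_2dirs.py -d0 ../renders/generated -d1 ../renders/ground_truth -o ../lpips_scores.txt --use_gpu

Note that lpips_2dirs.py pairs images by filename, so the two directories should contain correspondingly named renders.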

Pretrained Model

The weights of the pretrained models are downloadable here.

Citing This Work

@article{foti2024uv3ted,
    title={UV-free Texture Generation with Denoising and Geodesic Heat Diffusions},
    author={Foti, Simone and Zafeiriou, Stefanos and Birdal, Tolga},
    journal={Advances in Neural Information Processing Systems},
    year={2024}
}
