Image attribution: the 3D models are from Sketchfab (Fantasy House by LowlyPoly, Stuffed Dino Toy by Andrey.Chegodaev).
Diffusion Texture Painting
Anita Hu, Nishkrit Desai, Hassan Abu Alhaija, Seung Wook Kim, Masha Shugrina
Paper, Project Page
Abstract: We present a technique that leverages 2D generative diffusion models (DMs) for interactive texture painting on the surface of 3D meshes. Unlike existing texture painting systems, our method allows artists to paint with any complex image texture, and in contrast with traditional texture synthesis, our brush not only generates seamless strokes in real-time, but can inpaint realistic transitions between different textures. To enable this application, we present a stamp-based method that applies an adapted pre-trained DM to inpaint patches in local render space, which is then projected into the texture image, allowing artists control over brush stroke shape and texture orientation. We further present a way to adapt the inference of a pre-trained DM to ensure stable texture brush identity, while allowing the DM to hallucinate infinite variations of the source texture. Our method is the first to use DMs for interactive texture painting, and we hope it will inspire work on applying generative models to highly interactive artist-driven workflows.
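To make the stamp-based projection concrete: each brush stamp is inpainted in a small render-space patch and then written back into the texture image through the rasterizer's per-pixel UV coordinates. Below is a minimal nearest-texel sketch of that projection step in NumPy; it is illustrative only (the paper's implementation additionally handles filtering, seams, and stroke overlap), and all names here are hypothetical.

import numpy as np

def project_stamp_to_texture(texture, stamp_rgb, stamp_mask, uv_map):
    """Splat an inpainted render-space stamp into the UV texture.

    texture:    (H, W, 3) float texture image, updated in place
    stamp_rgb:  (h, w, 3) inpainted patch in local render space
    stamp_mask: (h, w) bool mask of pixels covered by the brush stamp
    uv_map:     (h, w, 2) per-pixel UV coordinates from the rasterizer, in [0, 1]
    """
    H, W = texture.shape[:2]
    ys, xs = np.nonzero(stamp_mask)
    u, v = uv_map[ys, xs, 0], uv_map[ys, xs, 1]
    # Nearest-texel splat; v is flipped because image rows grow downward.
    tx = np.clip(np.round(u * (W - 1)).astype(int), 0, W - 1)
    ty = np.clip(np.round((1.0 - v) * (H - 1)).astype(int), 0, H - 1)
    texture[ty, tx] = stamp_rgb[ys, xs]
    return texture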
For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing.
Verified on Linux Ubuntu 20.04.
This module provides the training script for fine-tuning a pre-trained Stable Diffusion inpainting model to support image-conditioning via a custom image encoder using LoRA. The resulting image encoder checkpoint and LoRA weights are needed in the next module.
To train the model from scratch, follow the instructions here.
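For orientation, the sketch below shows how LoRA adapters can be attached to a pre-trained Stable Diffusion inpainting UNet with diffusers and peft. It is a minimal illustration, not the repository's training script: the model id, rank, and target modules are assumptions, and the custom image encoder that provides image-conditioning is not shown.

from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

# Load only the UNet of a pre-trained inpainting model (model id is an assumption).
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", subfolder="unet")

# Wrap the attention projections with low-rank adapters; the base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(unet, lora_config)
unet.print_trainable_parameters()  # only the LoRA parameters are trainable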
Download the pretrained models and unzip them into the following folder:
cd trt_inference
wget https://nvidia-caat.s3.us-east-2.amazonaws.com/diffusion_texture_painting_model.zip
unzip diffusion_texture_painting_model.zip
This module accelerates diffusion model inference using TensorRT. Inference runs in an isolated Docker container and communicates with the Texture Painting App over a WebSocket.
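Once the container described below is running, you can sanity-check the WebSocket endpoint on port 6060 with a few lines of Python. This probe only verifies connectivity; the actual request and response payloads of the painting protocol are not documented here, and the "ping" message is purely hypothetical.

import asyncio
import websockets  # pip install websockets

async def probe(uri="ws://localhost:6060"):
    async with websockets.connect(uri) as ws:
        await ws.send("ping")  # hypothetical payload, not the app's real protocol
        reply = await asyncio.wait_for(ws.recv(), timeout=10)
        print("server replied:", reply)

asyncio.run(probe())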
Install nvidia-docker using these instructions.
Build the Docker image:
cd trt_inference
docker build . -t texture-painter
Launch the Docker container (the first launch takes longer because the TRT engine must be built):
cd trt_inference
mkdir engine # cache the built trt model files
docker run -it --rm --gpus all -p 6060:6060 -v $PWD/engine:/workspace/engine texture-painter
Wait until you see "TRTConditionalInpainter ready", which means the TRT engine has been built successfully. You can then exit the container and continue below.
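The mounted engine folder is what makes later launches fast: once an engine has been serialized there, the container can deserialize it instead of rebuilding. The sketch below shows the generic load-or-build pattern with the TensorRT Python API; the file names and builder flags are illustrative assumptions, not the container's actual code.

import os
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.INFO)
ENGINE_PATH = "engine/inpainter.plan"  # hypothetical cache file under the mounted dir

def load_or_build_engine(onnx_path):
    # Reuse a previously serialized engine if one exists in the cache.
    if os.path.exists(ENGINE_PATH):
        with open(ENGINE_PATH, "rb") as f, trt.Runtime(LOGGER) as runtime:
            return runtime.deserialize_cuda_engine(f.read())
    # Otherwise parse the ONNX model and build a new engine (the slow path).
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    serialized = builder.build_serialized_network(network, config)
    with open(ENGINE_PATH, "wb") as f:
        f.write(serialized)
    with trt.Runtime(LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(serialized)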
This module contains the app for texture painting on UV-mapped 3D meshes.
cd kit_app && bash build.sh
Option 1: Launch inference server and app separately
To paint with the diffusion model, ensure that the TRT inference server is running before launching the app.
bash launch_trt_server.sh
Launch the kit application:
bash launch_app.sh
Option 2: Launch together
With tmux installed, launch the inference server and the app at the same time:
bash launch_all.sh
For instructions on how to use the app, refer to the tutorial here.
The repository contains research code integrated into a kit application, based on the kit-app-template. All code under the kit_app folder is subject to the terms of Omniverse Kit SDK, with the exception of the subfolder kit_app/source/extensions/aitoybox.texture_painter, which is governed by NVIDIA Source Code License.
All code in the repository not under the kit_app folder is also subject to NVIDIA Source Code License.
@inproceedings{texturepainting2024,
  author    = {Hu, Anita and Desai, Nishkrit and Abu Alhaija, Hassan and Kim, Seung Wook and Shugrina, Maria},
  title     = {Diffusion Texture Painting},
  booktitle = {ACM SIGGRAPH 2024 Conference Proceedings},
  year      = {2024},
}