
Point-Based Radiance Fields for Controllable Human Motion Synthesis

Authors: Deheng Zhang*, Haitao Yu*, Peiyuan Xie*, Tianyi Zhang*

This repository contains the official implementation of Point-Based Radiance Fields for Controllable Human Motion Synthesis.

Overview

(Figure: method pipeline)

Our method exploits an explicit point cloud to train the static 3D scene and applies deformations by encoding point-cloud translations with a deformation MLP. To keep the rendering consistent with canonical-space training, we estimate local rotations using SVD and apply the interpolated per-point rotation to the query view directions of the pre-trained radiance field. Extensive experiments show that our approach significantly outperforms the state of the art on fine-level complex deformations and generalizes to 3D characters beyond humans.
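
To make the rotation step concrete, here is a minimal numpy sketch of SVD-based (orthogonal Procrustes) local rotation estimation. `estimate_local_rotation` is an illustrative helper, not the repo's API; the actual neighborhood construction and interpolation live in the deformation code.

    import numpy as np

    def estimate_local_rotation(canonical_nbrs, deformed_nbrs):
        """Best-fit rotation mapping a point's canonical neighborhood
        onto its deformed neighborhood (orthogonal Procrustes/Kabsch)."""
        # Center both neighborhoods on their centroids.
        P = canonical_nbrs - canonical_nbrs.mean(axis=0)
        Q = deformed_nbrs - deformed_nbrs.mean(axis=0)
        U, _, Vt = np.linalg.svd(P.T @ Q)   # cross-covariance + SVD
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # fix reflections so det(R) = +1
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R

    # Sanity check: recover a known rotation, then rotate a query view
    # direction back into canonical space before querying the field.
    rng = np.random.default_rng(0)
    nbrs = rng.standard_normal((8, 3))
    a = np.pi / 6
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0,          0,         1]])
    R_est = estimate_local_rotation(nbrs, nbrs @ R_true.T)
    canonical_dir = R_est.T @ np.array([0.0, 0.0, 1.0])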

Demo

(Demo videos: canonical rendering at 200K steps, Spiderman with ray bending, dragon)

Overall Instructions

  1. First install the libraries as below and download/prepare the datasets as instructed.
  2. Point Initialization: Download the pre-trained MVSNet as below and train the feature extraction from scratch, or directly download the pre-trained models (to obtain the MVSNet and init folders under checkpoints/).
  3. Per-scene Optimization: Download the pre-trained models or optimize from scratch as instructed.

Installation

Requirements

All code was tested in the following environment: Python 3.8; Ubuntu 20.04; CUDA > 11.7.

Install

  • Create the environment from the yml file:

    conda env create -f environment.yml
  • Install pytorch3d

    conda activate point-nerf-editing
    pip install fvcore iopath
    pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu117_pyt1131/download.html
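
A quick optional sanity check (a sketch assuming the point-nerf-editing environment is active) confirms that PyTorch sees the GPU and pytorch3d imports cleanly:

    import torch
    import pytorch3d

    # Expect True on a machine with a working CUDA 11.7 setup.
    print(torch.__version__, torch.cuda.is_available())
    print(pytorch3d.__version__)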

Data Preparation

We provide all data folders here: polybox. Please put the folders in the following directory structure.

pointnerf
├── data_src
│   ├── nerf
│   │   ├── nerf_synthetic

Alternatively, you can follow the instructions in PointNeRF-Assistant to create your own dataset; the data format should be the same as the nerf_synthetic dataset.
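
If you create your own dataset, a quick way to check that it follows the nerf_synthetic convention is to load one of the transforms files. The scene name below (lego) is only a placeholder:

    import json
    from pathlib import Path

    scene = Path("data_src/nerf/nerf_synthetic/lego")  # placeholder scene
    meta = json.loads((scene / "transforms_train.json").read_text())
    print("camera_angle_x:", meta["camera_angle_x"])
    for frame in meta["frames"][:2]:
        # Each frame stores an image path and a 4x4 camera-to-world matrix.
        print(frame["file_path"], len(frame["transform_matrix"]))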

Initialization and Optimization

Download pre-trained MVSNet checkpoints:

We trained MVSNet on DTU. You can download the "MVSNet" directory from Google Drive and place it under checkpoints/.

Download per-scene optimized Point-NeRFs

You can skip training and download the nerfsynth checkpoint folders here: polybox. Place them in the following directory structure.

pointnerf
├── checkpoints
│   ├── init
│   ├── MVSNet
│   ├── nerfsynth

For each scene, we provide the points and weights at 200K steps in 200000_net_ray_marching.pth.
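
To verify a downloaded checkpoint before running anything, here is a minimal inspection sketch (the scene path is illustrative, and the exact contents depend on the PointNeRF codebase):

    import torch

    ckpt = torch.load(
        "checkpoints/nerfsynth/dragon/200000_net_ray_marching.pth",
        map_location="cpu")
    # Listing the top-level keys is a safe first look at what was saved.
    if isinstance(ckpt, dict):
        print(list(ckpt)[:10])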

Canonical Scene Optimization

Train scripts:
    bash dev_scripts/w_n360/dragon_cuda.sh
    bash dev_scripts/w_n360/gangnam_cuda.sh
    bash dev_scripts/w_n360/human_cuda.sh
    bash dev_scripts/w_n360/phoenix_cuda.sh
    bash dev_scripts/w_n360/robot_cuda.sh
    bash dev_scripts/w_n360/samba_cuda.sh
    bash dev_scripts/w_n360/spiderman_cuda.sh
    bash dev_scripts/w_n360/turtle_cuda.sh
    bash dev_scripts/w_n360/woman_cuda.sh

Point Cloud Deformation

Deformation scripts:
    bash dev_scripts/w_n360/dragon_deform.sh
    bash dev_scripts/w_n360/gangnam_deform.sh
    bash dev_scripts/w_n360/human_deform.sh
    bash dev_scripts/w_n360/phoenix_deform.sh
    bash dev_scripts/w_n360/robot_deform.sh
    bash dev_scripts/w_n360/samba_deform.sh
    bash dev_scripts/w_n360/spiderman_deform.sh
    bash dev_scripts/w_n360/turtle_deform.sh
    bash dev_scripts/w_n360/woman_deform.sh

Notes on the configuration in deform.sh

ray_bend=0     # 0: no ray bending; 1: use ray bending
sample_num=-1  # -1: use the whole set of keypoints; 0~1: ratio of the original keypoints; >1: number of keypoints
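
As a sketch of how the sample_num convention above could be applied (the repo's actual sampling strategy may differ, e.g., random or farthest-point sampling):

    import numpy as np

    def select_keypoints(keypoints, sample_num):
        """-1 keeps all keypoints, a value in (0, 1] keeps that ratio,
        and a value > 1 keeps that many (uniformly subsampled)."""
        n = len(keypoints)
        if sample_num == -1:
            return keypoints
        k = round(n * sample_num) if 0 < sample_num <= 1 else int(sample_num)
        return keypoints[np.linspace(0, n - 1, num=min(k, n), dtype=int)]

    pts = np.random.rand(1000, 3)
    print(len(select_keypoints(pts, -1)),    # 1000
          len(select_keypoints(pts, 0.25)),  # 250
          len(select_keypoints(pts, 64)))    # 64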

Reference

@misc{yu2023pointbased,
      title={Point-Based Radiance Fields for Controllable Human Motion Synthesis}, 
      author={Haitao Yu and Deheng Zhang and Peiyuan Xie and Tianyi Zhang},
      year={2023},
      eprint={2310.03375},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

Our repo is developed based on MVSNet, PointNeRF, and DPF. Please also consider citing the corresponding papers. We thank our supervisor Dr. Sergey Prokudin from the Computer Vision and Learning Group at ETH Zurich for his help and tons of useful advice on this project.

License

The code is released under the GPL-3.0 license.
