This repository contains the implementation of the CVPR 2022 paper: Dense Depth Priors for Neural Radiance Fields from Sparse Input Views.
Arxiv | Video | Project Page
You can skip training the depth completion network and download the model trained on ScanNet from here.
Extract the ScanNet dataset, e.g. using SensReader, and place the files scannetv2_test.txt, scannetv2_train.txt and scannetv2_val.txt from the ScanNet Benchmark into the same directory.
Run the COLMAP feature extractor on all RGB images of ScanNet.
For this, the RGB files need to be isolated from the other scene data, e.g. by creating a temporary directory tmp and copying each <scene>/color/<rgb_filename> to tmp/<scene>/color/<rgb_filename>.
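The copying step above can be sketched as follows. This is a minimal helper, not part of the repository; it assumes the usual ScanNet layout where each scene directory contains a color/ subdirectory with the RGB frames:

```python
import os
import shutil

def isolate_rgb(scannet_dir, tmp_dir):
    """Copy only the color/ subdirectory of each scene into tmp_dir,
    so that COLMAP sees nothing but the RGB images."""
    for scene in sorted(os.listdir(scannet_dir)):
        color_src = os.path.join(scannet_dir, scene, "color")
        if not os.path.isdir(color_src):
            continue  # skip stray files or scenes without extracted RGB
        shutil.copytree(color_src, os.path.join(tmp_dir, scene, "color"))
```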
Then run:
colmap feature_extractor --database_path scannet_sift_database.db --image_path tmp
When working with different relative paths or filenames, the database reading in scannet_dataset.py needs to be adapted accordingly.
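For orientation when adapting the database reading: a COLMAP database is an SQLite file whose images table maps image IDs to the relative image paths, and whose keypoints table stores the detected SIFT keypoints as a row-major float32 blob (x and y in the first two columns). A minimal sketch of reading it, independent of the repository's own loader:

```python
import sqlite3
import numpy as np

def load_keypoints(db_path):
    """Return a dict mapping image name -> Nx2 array of SIFT keypoint
    locations, read from a COLMAP database (standard COLMAP schema)."""
    conn = sqlite3.connect(db_path)
    names = dict(conn.execute("SELECT image_id, name FROM images"))
    keypoints = {}
    for image_id, rows, cols, data in conn.execute(
            "SELECT image_id, rows, cols, data FROM keypoints"):
        kp = np.frombuffer(data, dtype=np.float32).reshape(rows, cols)
        keypoints[names[image_id]] = kp[:, :2]  # keep x, y only
    conn.close()
    return keypoints
```

The image names stored in the database are the paths relative to the --image_path given to the feature extractor, e.g. scene0710_00/color/0.jpg with the tmp layout above, which is why changed relative paths require adapting the lookup.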
Download the pretrained ResNet from here.
python3 run_depth_completion.py train --dataset_dir <path to ScanNet> --db_path <path to database> --pretrained_resnet_path <path to pretrained resnet> --ckpt_dir <path to write checkpoints>
Checkpoints are written into a subdirectory of the provided checkpoint directory. The subdirectory is named after the training start time in the format yyyymmdd_hhmmss, which also serves as the experiment name in the following.
python3 run_depth_completion.py test --expname <experiment name> --dataset_dir <path to ScanNet> --db_path <path to database> --ckpt_dir <path to write checkpoints>
You can skip the scene preparation and directly download the scenes. To prepare a scene and render sparse depth maps from COLMAP sparse reconstructions, run:
cd preprocessing
mkdir build
cd build
cmake ..
make -j
./extract_scannet_scene <path to scene> <path to ScanNet>
The scene directory must contain the following:
- train.csv: list of training views from the ScanNet scene
- test.csv: list of test views from the ScanNet scene
- config.json: parameters for the scene:
  - name: name of the scene
  - max_depth: maximal depth value in the scene, larger values are invalidated
  - dist2m: scaling factor that scales the sparse reconstruction to meters
  - rgb_only: write RGB only, e.g. to get input for COLMAP
- colmap: directory containing 2 sparse reconstructions:
  - sparse: reconstruction run on train and test images together to determine the camera poses
  - sparse_train: reconstruction run on train images alone to determine the sparse depth maps
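A hypothetical config.json with these fields might look as follows; the values here are purely illustrative, not taken from the provided scenes:

```json
{
  "name": "scene0710_00",
  "max_depth": 4.0,
  "dist2m": 1.0,
  "rgb_only": false
}
```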
Please check the provided scenes as an example.
The option rgb_only is used to preprocess the RGB images before running COLMAP: it crops the dark image borders caused by calibration, which harm the NeRF optimization. It is essential to crop them before running COLMAP to ensure that the estimated intrinsics match the cropped RGB images.
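The idea of the border cropping can be sketched as below. This is an illustrative stand-in, not the repository's preprocessing code; it assumes the border is uniformly dark and crops edge rows and columns whose mean intensity stays under a threshold:

```python
import numpy as np

def crop_dark_border(img, threshold=10):
    """Crop rows/columns at the image edges whose mean intensity is
    below `threshold` (dark calibration borders). img is HxWx3 uint8."""
    gray = img.mean(axis=2)
    rows = np.where(gray.mean(axis=1) >= threshold)[0]
    cols = np.where(gray.mean(axis=0) >= threshold)[0]
    if rows.size == 0 or cols.size == 0:
        return img  # entirely dark; nothing sensible to crop
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

Note that the same crop offsets would also have to be applied consistently across all frames of a scene, since COLMAP estimates one set of intrinsics per camera.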
python3 run_nerf.py train --scene_id <scene, e.g. scene0710_00> --data_dir <directory containing the scenes> --depth_prior_network_path <path to depth prior checkpoint> --ckpt_dir <path to write checkpoints>
Checkpoints are written into a subdirectory of the provided checkpoint directory. The subdirectory is named after the training start time in the format yyyymmdd_hhmmss, which also serves as the experiment name in the following.
python3 run_nerf.py test --expname <experiment name> --data_dir <directory containing the scenes> --ckpt_dir <path to write checkpoints>
The test results are stored in the experiment directory.
Running python3 run_nerf.py test_opt ... performs test-time optimization of the latent codes before computing the test metrics.
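Conceptually, test-time optimization keeps the trained network frozen and updates only the per-view latent code to fit the observed test image. A toy sketch of this idea, with a hypothetical linear decoder standing in for the frozen network and plain gradient descent instead of the actual optimizer:

```python
import numpy as np

def optimize_latent(W, target, steps=2000, lr=0.05):
    """Test-time optimization sketch: the decoder weights W stay
    frozen, only the latent code z is updated to minimize the
    L2 error ||W @ z - target||^2 against the observed target."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        residual = W @ z - target
        z -= lr * 2.0 * W.T @ residual  # gradient of the L2 loss w.r.t. z
    return z
```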
python3 run_nerf.py video --expname <experiment name> --data_dir <directory containing the scenes> --ckpt_dir <path to write checkpoints>
The video is stored in the experiment directory.
If you find this repository useful, please cite:
@inproceedings{roessle2022depthpriorsnerf,
title={Dense Depth Priors for Neural Radiance Fields from Sparse Input Views},
author={Barbara Roessle and Jonathan T. Barron and Ben Mildenhall and Pratul P. Srinivasan and Matthias Nie{\ss}ner},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month={June},
year={2022}
}
We thank nerf-pytorch and CSPN, from which this repository borrows code.