Code release for:

K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbæk Warburg, Benjamin Recht, Angjoo Kanazawa

In this work we develop an extensible and explicit radiance field model that can be used for static, dynamic, and variable-appearance datasets.

🚀 Cool videos here: Project page

📰 Paper here: Paper
We recommend setting up a conda environment with PyTorch for GPU training (a high-memory GPU is not required). Training and evaluation data can be downloaded from the respective project websites (NeRF, LLFF, DyNeRF, D-NeRF, Phototourism).
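A rough sketch of the environment setup, assuming a fresh conda environment and that the repository ships a requirements file for the remaining dependencies (the Python version and package list here are assumptions; adjust the PyTorch build to your CUDA version):

conda create -n kplanes python=3.9
conda activate kplanes
pip install torch torchvision  # choose the build matching your CUDA version
pip install -r requirements.txt  # assumption: remaining dependencies are listed here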
Our config files are provided in the configs directory, organized by dataset and by explicit vs. hybrid model version. Update these config files with the location of the downloaded data, your desired scene name, and an experiment name. To train a model, run
PYTHONPATH='.' python plenoxels/main.py --config-path path/to/config.py
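For example, training the hybrid model on a D-NeRF scene could look like the following (the config path is illustrative; substitute the matching file from the configs directory):

PYTHONPATH='.' python plenoxels/main.py --config-path plenoxels/configs/final/dnerf/dnerf_hybrid.py  # illustrative path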
Note that for DyNeRF scenes we recommend first running a single iteration at 4x downsampling to pre-compute and store the ray importance weights, then training as usual at 2x downsampling. This step is not needed for other datasets.
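A sketch of that two-stage DyNeRF workflow, assuming the DyNeRF config exposes a downsampling factor and a step count that you edit in your copy of the config file (the setting names described in the comments are assumptions, not documented keys):

# Stage 1: in your copy of the config, set downsampling to 4 and the number of steps to 1,
# so this run only pre-computes and caches the ray importance weights.
PYTHONPATH='.' python plenoxels/main.py --config-path path/to/dynerf_config.py

# Stage 2: restore downsampling to 2 and the full step count, then train as usual.
PYTHONPATH='.' python plenoxels/main.py --config-path path/to/dynerf_config.py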
The main.py script also supports rendering a novel camera trajectory, evaluating quality metrics, and rendering a space-time decomposition video from a saved model. These options are accessed via the flags --render-only, --validate-only, and --spacetime-only, and a saved model can be specified via --log-dir.
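For example, to compute quality metrics for a trained model (paths below are placeholders):

PYTHONPATH='.' python plenoxels/main.py --config-path path/to/config.py --validate-only --log-dir path/to/saved/model

Replacing --validate-only with --render-only or --spacetime-only renders a novel camera trajectory or a space-time decomposition video instead.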
If you find this work useful, please cite:

@misc{sfk_kplanes_2023,
title={K-Planes: Explicit Radiance Fields in Space, Time, and Appearance},
author={Sara Fridovich-Keil and Giacomo Meanti and Frederik Rahbæk Warburg and Benjamin Recht and Angjoo Kanazawa},
year={2023},
eprint={2301.10241},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
This work is made available under the BSD 3-clause license. Click here to view a copy of the license.