MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere Image aided Generalizable Neural Radiance Field
Dongyu Yan, Guanyu Huang, Fengyu Quan and Haoyao Chen
WACV 2025
This repository contains the code for the paper "MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere Image aided Generalizable Neural Radiance Field".
conda env create -f environment.yml
conda activate msinerf
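After creating the environment, it can help to verify that PyTorch sees a GPU. This is an optional sanity check and assumes the environment installs PyTorch with CUDA support (implied by the --gpus flags used below):

# Optional sanity check (assumes environment.yml installs PyTorch with CUDA support)
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"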
The Replica360 dataset proposed in the paper can be downloaded here (currently only a sample case for testing; the full version is coming soon).
The model checkpoint can be downloaded here.
- Run training
python train.py --config ./config/default.txt --gpus 4 --batch_size 1 --split_ratio 0.95 --max_epoch 30
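The flags set the number of GPUs, the per-GPU batch size, the train/validation split ratio, and the number of epochs. On a single-GPU machine the same command should work with a reduced --gpus value; this is an assumption based on the flag name, not a documented configuration:

# Hypothetical single-GPU run (assumes --gpus only controls the device count)
python train.py --config ./config/default.txt --gpus 1 --batch_size 1 --split_ratio 0.95 --max_epoch 30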
- Run depth map generation (along with color map generation at the rig center)
python test.py --config ./config/default.txt --gpus 1 --batch_size 1 --ckpts_epoch 29 --split_ratio 0.95
- Run color map generation on the NVS dataset
python test.py --config ./config/default.txt --gpus 1 --batch_size 1 --ckpts_epoch 29 --eval_nvs
- Run novel view synthesis along a predefined trajectory (the --eval_nvs flag is optional)
python test.py --config ./config/default.txt --gpus 1 --batch_size 1 --ckpts_epoch 29 --traj_type 0 --render_novel_view --eval_nvs
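To assemble the rendered novel-view frames into a video, a standard ffmpeg call can be used. The frame naming pattern and output directory below are placeholders; adapt them to whatever test.py actually writes:

# Placeholder frame pattern and path; adjust to the actual output of test.py
ffmpeg -framerate 30 -i ./renders/frame_%04d.png -c:v libx264 -pix_fmt yuv420p novel_view.mp4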
If you find this paper useful, please cite:
@article{yan2024msi,
  title={MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere Image aided Generalizable Neural Radiance Field},
  author={Yan, Dongyu and Huang, Guanyu and Quan, Fengyu and Chen, Haoyao},
  journal={arXiv preprint arXiv:2403.10840},
  year={2024}
}