Multi-view 3D reconstruction techniques process a set of source views and a reference view to estimate a depth map for the latter. Unfortunately, state-of-the-art frameworks:
- require a priori knowledge of the scene depth range, in order to sample a set of depth hypotheses and build a meaningful cost volume (see the sketch after this list);
- do not take keyframe selection into account.
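For context, classical cost-volume frameworks sample their depth hypotheses inside a known range, as in the minimal PyTorch sketch below; `d_min`, `d_max`, and `num_hypotheses` are illustrative placeholders, not values from our framework:

```python
import torch

# Classical plane-sweep MVS: depth hypotheses are sampled inside a known
# [d_min, d_max] range before building the cost volume. If the assumed
# range is wrong, every hypothesis (and thus the estimated depth) is wrong.
d_min, d_max = 0.5, 10.0   # placeholder range: must be known a priori
num_hypotheses = 64

# uniform sampling in inverse depth, a common choice for wide-range scenes
inv_depths = torch.linspace(1.0 / d_max, 1.0 / d_min, num_hypotheses)
depth_hypotheses = 1.0 / inv_depths   # shape: (num_hypotheses,)
```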
In this paper, we propose a novel framework that requires no prior knowledge of the scene depth range and is capable of distinguishing the most meaningful source frames. The proposed method unlocks the capability to apply multi-view depth estimation to a wider range of scenarios, such as large-scale outdoor environments and top-view buildings.
Our method relies on an iterative approach: starting from a zero-initialized depth map, we extract geometric correlation cues and update the prediction. At each iteration we also feed information extracted from the reference view only (the one for which we want to compute depth). Moreover, at each iteration we use a different source view, exploiting multi-view information in a round-robin fashion, as sketched below. For more details please refer to the paper.
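At a high level, the loop can be pictured as in the following minimal sketch. All names here (`encode`, `correlate`, `update`, `iterative_depth`) are illustrative placeholders for the modules described in the paper, not the actual API shipped in this repo:

```python
import torch

def iterative_depth(reference, sources, model, num_iters=8):
    """Hypothetical outline of the iterative loop described above; the
    module names (encode/correlate/update) are placeholders, not our API."""
    b, _, h, w = reference.shape
    depth = torch.zeros(b, 1, h, w)           # zero-initialized depth map
    ref_feats = model.encode(reference)       # reference-view cues, fed at every step
    for it in range(num_iters):
        source = sources[it % len(sources)]   # round-robin over source views
        corr = model.correlate(ref_feats, model.encode(source), depth)
        depth = depth + model.update(corr, ref_feats)  # refine the prediction
    return depth
```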
If you find our work useful, please cite:

```bibtex
@InProceedings{Conti_2024_3DV,
    author    = {Conti, Andrea and Poggi, Matteo and Cambareri, Valerio and Mattoccia, Stefano},
    title     = {Range-Agnostic Multi-View Depth Estimation With Keyframe Selection},
    booktitle = {International Conference on 3D Vision},
    month     = {March},
    year      = {2024},
}
```
In this repo we provide the evaluation code for our paper: it allows loading the pre-trained models on Blended and TartanAir and testing them. Please note that we do not provide the source code of our models, only compiled binaries to perform inference.
Dependencies can be installed with conda or mamba as follows:
```bash
$ # first of all clone the repo and build the conda environment
$ git clone https://github.com/andreaconti/ramdepth.git
$ cd ramdepth
$ conda env create -f environment.yml # use mamba if conda is too slow
$ conda activate ramdepth
$ # then, download and install the wheel containing the pretrained models, available for linux, windows and macos
$ pip install https://github.com/andreaconti/ramdepth/releases/download/wheels%2Fv0.1.0/ramdepth-0.1.0-cp310-cp310-linux_x86_64.whl --no-deps
```
Then you can run evaluate.ipynb to select the dataset and the pre-trained model you want to test. Results may differ slightly from those in the main paper due to small differences in the dataloaders and the framework introduced by the packaging process.
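For reference, depth evaluation in multi-view settings typically relies on standard error measures; the sketch below is a generic implementation of two common ones, not necessarily the exact code inside evaluate.ipynb:

```python
import torch

def depth_metrics(pred: torch.Tensor, gt: torch.Tensor) -> dict:
    """Generic depth-evaluation metrics on valid (gt > 0) pixels; an
    illustrative reference, not the exact implementation of this repo."""
    valid = gt > 0
    pred, gt = pred[valid], gt[valid]
    abs_err = (pred - gt).abs()
    return {
        "mae": abs_err.mean().item(),            # mean absolute error
        "absrel": (abs_err / gt).mean().item(),  # absolute relative error
    }
```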