Monocular Depth Estimation Using Relative Depth Maps
(Supplemental) Monocular Depth Estimation Using Relative Depth Maps
If you use our code or results, please cite:
@InProceedings{Lee_2019_CVPR,
  author    = {Lee, Jae-Han and Kim, Chang-Su},
  title     = {Monocular Depth Estimation Using Relative Depth Maps},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
You can download our trained caffemodel from the following link: default_mode_net.caffemodel
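A minimal sketch of loading the model with Caffe's MATLAB interface (matcaffe) is shown below; the network definition file 'deploy.prototxt' and the input image 'test_rgb.png' are placeholder names, and the exact preprocessing (resizing, mean subtraction) depends on the network definition in this repository:
caffe.set_mode_gpu();
caffe.set_device(0);
net = caffe.Net('deploy.prototxt', 'default_mode_net.caffemodel', 'test');   % 'deploy.prototxt' is a placeholder name
img = single(imread('test_rgb.png'));                                        % placeholder input image
input = permute(img(:, :, [3 2 1]), [2 1 3]);                                % Caffe expects single-precision, W x H x C, BGR-ordered input
out = net.forward({input});                                                  % network outputs returned as a cell array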
You should download the 'nyu_depth_v2_labeled.mat' and 'splits.mat' files from the official NYU Depth V2 site: nyu_depth_v2_labeled.mat, splits.mat
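As a minimal sketch, the labeled data and the official train/test split can be loaded in MATLAB as follows (the variable names 'images', 'depths', and 'testNdxs' follow the official NYUv2 .mat files):
data = load('nyu_depth_v2_labeled.mat', 'images', 'depths');   % RGB images and ground-truth depths (meters)
splits = load('splits.mat');                                   % contains trainNdxs and testNdxs
test_rgb = data.images(:, :, :, splits.testNdxs);              % 654 test RGB images
test_depth = data.depths(:, :, splits.testNdxs);               % corresponding ground-truth depth maps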
The results of our algorithm on the 654 test images of the NYUv2 set are located in 'results/depth_map'. All depth maps are stored as PNG files, and each pixel is a 16-bit value. You can convert the PNG files to depth values in meters as follows:
png_depth = imread('depth_test001.png');      % 16-bit PNG, pixel values in [0, 65535]
depth = double(png_depth) / (2^16-1) * 10;    % depth in meters, maximum depth 10 m
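Conversely, a depth map in meters can be written back to the same 16-bit PNG format; this is a minimal sketch, assuming depths are clipped to the [0, 10] m range and using an example output file name:
depth_clipped = min(max(depth, 0), 10);              % clip to the valid depth range
png_out = uint16(depth_clipped / 10 * (2^16-1));     % rescale to 16-bit integers
imwrite(png_out, 'depth_out.png');                   % imwrite stores uint16 data as a 16-bit PNG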