This repository contains the code associated with the publication:

*Deconstructing Self-Supervised Monocular Reconstruction: The Design Decisions that Matter*

Jaime Spencer, Chris Russell, Simon Hadfield and Richard Bowden
We are currently also organizing the second edition of the Monocular Depth Estimation Challenge around the proposed SYNS-Patches dataset! This challenge will take place at MDEC@CVPR2023. Please check the website for details!
- `.git-hooks`: Dir containing a pre-commit hook for ignoring Jupyter Notebook outputs.
- `api`: Dir containing main scripts for training, evaluating and data preparation.
- `assets`: Dir containing images used in README.
- `cfg`: Dir containing config files for training/evaluating.
- `docker`: Dir containing Dockerfile and Anaconda package requirements.
- `data`\*: (Optional) Dir containing datasets.
- `hpc`: (Optional) Dir containing submission files to HPC clusters.
- `models`\*: (Optional) Dir containing trained model checkpoints.
- `src`: Dir containing source code.
- `tests`: Dir containing codebase tests (`pytest`).
- `.gitignore`: File containing patterns ignored by Git.
- `PATHS.yaml`\*: File containing additional data & model roots.
- `README.md`: This file!

\* Not tracked by Git!
For instructions on using this code, please refer to the READMEs in the respective subdirectories.
Please note that this code has been tested using Python 3.9 and PyTorch 1.12, and may not work for earlier versions. Remember to add the path to the repo to the `PYTHONPATH` in order to run the code:
```bash
# Example for `bash`. Can be added to `~/.bashrc`.
export PYTHONPATH=/path/to/monodepth_benchmark:$PYTHONPATH
```
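Alternatively, the repo root can be prepended at runtime. This is a minimal sketch, not part of the codebase; adjust the placeholder path to your checkout:

```python
# Alternative to exporting PYTHONPATH: prepend the repo root to the module
# search path before importing anything from `src`.
import sys

sys.path.insert(0, "/path/to/monodepth_benchmark")  # Placeholder path.
```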
First, set up a Git pre-commit hook that stops us from committing Jupyter Notebooks with outputs, since they may potentially contain large images:
```bash
./.git-hooks/setup.sh
chmod +x .git/hooks/pre-commit  # File sometimes isn't copied as executable. This should fix it.
```
If using Miniconda, create the environment and run commands as:

```bash
conda env create --file docker/environment.yml
conda activate <env-name>  # Use the environment name defined in `docker/environment.yml`.
python api/train/train.py ...
```
To instead build and run the Docker image:

```bash
docker build -t monoenv ./docker

docker run -it \
    --shm-size=24gb \
    --gpus all \
    -v $(pwd -P):$(pwd -P) \
    -v /path/to/dataroot1:/path/to/dataroot1 \
    --user $(id -u):$(id -g) \
    monoenv:latest \
    /bin/bash

# Inside the container:
python api/train/train.py ...
```
You can run the available `pytest` tests as `python -m pytest`. Please note they are not extensive and only cover parts of the library. However, they can be used to check that the datasets have been installed and preprocessed correctly.
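For instance, the tests can also be invoked programmatically. This is a minimal sketch; the `"dataset"` keyword filter is an assumption about the test names, so adjust it to match the actual suite:

```python
# Programmatic equivalent of `python -m pytest tests -k dataset`.
import sys

import pytest

# Run only tests whose names match the keyword expression and propagate
# pytest's exit code to the shell.
sys.exit(pytest.main(["tests", "-k", "dataset"]))
```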
The default locations for datasets and model checkpoints are `./data` & `./models`, respectively. If you want to store them somewhere else, you can either create symlinks to them, or add additional roots. This is done by creating the `./PATHS.yaml` file with the following contents:
```yaml
# -----------------------------------------------------------------------------
MODEL_ROOTS:
  - /path/to/modelroot1

DATA_ROOTS:
  - /path/to/dataroot1
  - /path/to/dataroot2
  - /path/to/dataroot3
# -----------------------------------------------------------------------------
```
NOTE: Multiple roots may be useful if training in an HPC cluster where data has to be copied locally. Roots should be listed in order of preference, i.e. `dataroot1/kitti_raw_syns` will be given preference over `dataroot2/kitti_raw_syns`.
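As a rough illustration of that lookup order, here is a minimal sketch of how a dataset root might be resolved. The helper name is hypothetical; the actual logic lives in `src`:

```python
from pathlib import Path

import yaml  # PyYAML

def find_dataset_root(dataset: str, paths_file: str = "PATHS.yaml") -> Path:
    """Return the first `DATA_ROOTS` entry that contains `dataset`."""
    cfg = yaml.safe_load(Path(paths_file).read_text())
    for root in cfg["DATA_ROOTS"]:  # Roots are checked in listed order.
        candidate = Path(root) / dataset
        if candidate.is_dir():
            return candidate
    raise FileNotFoundError(f"'{dataset}' not found in any DATA_ROOT.")

# E.g. this prefers `dataroot1/kitti_raw_syns` over `dataroot2/kitti_raw_syns`.
```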
If you used the code in this repository or found the paper interesting, please cite it as:

```bibtex
@article{Spencer2022,
    title={Deconstructing Self-Supervised Monocular Reconstruction: The Design Decisions that Matter},
    author={Spencer, Jaime and Russell, Chris and Hadfield, Simon and Bowden, Richard},
    journal={arXiv preprint arXiv:2208.01489},
    year={2022}
}
```
We would also like to thank the authors of the papers below for their contributions and for releasing their code. Please consider citing them in your own work.
| Tag | Title | Author | Conf | ArXiv | GitHub |
|---|---|---|---|---|---|
| Garg | Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue | Garg et al. | ECCV 2016 | ArXiv | GitHub |
| Monodepth | Unsupervised Monocular Depth Estimation with Left-Right Consistency | Godard et al. | CVPR 2017 | ArXiv | GitHub |
| Kuznietsov | Semi-Supervised Deep Learning for Monocular Depth Map Prediction | Kuznietsov et al. | CVPR 2017 | ArXiv | GitHub |
| SfM-Learner | Unsupervised Learning of Depth and Ego-Motion from Video | Zhou et al. | CVPR 2017 | ArXiv | GitHub |
| Depth-VO-Feat | Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction | Zhan et al. | CVPR 2018 | ArXiv | GitHub |
| DVSO | Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry | Yang et al. | ECCV 2018 | ArXiv | |
| Klodt | Supervising the new with the old: learning SFM from SFM | Klodt & Vedaldi | ECCV 2018 | CVF | |
| MonoResMatch | Learning monocular depth estimation infusing traditional stereo knowledge | Tosi et al. | CVPR 2019 | ArXiv | GitHub |
| DepthHints | Self-Supervised Monocular Depth Hints | Watson et al. | ICCV 2019 | ArXiv | GitHub |
| Monodepth2 | Digging Into Self-Supervised Monocular Depth Estimation | Godard et al. | ICCV 2019 | ArXiv | GitHub |
| SuperDepth | SuperDepth: Self-Supervised, Super-Resolved Monocular Depth Estimation | Pillai et al. | ICRA 2019 | ArXiv | GitHub |
| Johnston | Self-supervised Monocular Trained Depth Estimation using Self-attention and Discrete Disparity Volume | Johnston & Carneiro | CVPR 2020 | ArXiv | |
| FeatDepth | Feature-metric Loss for Self-supervised Learning of Depth and Egomotion | Shu et al. | ECCV 2020 | ArXiv | GitHub |
| CADepth | Channel-Wise Attention-Based Network for Self-Supervised Monocular Depth Estimation | Yan et al. | 3DV 2021 | ArXiv | GitHub |
| DiffNet | Self-Supervised Monocular Depth Estimation with Internal Feature Fusion | Zhou et al. | BMVC 2021 | ArXiv | GitHub |
| HR-Depth | HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation | Lyu et al. | AAAI 2021 | ArXiv | GitHub |
This project is licensed under the Commons Clause and GNU GPL licenses. For commercial use, please contact the authors.