```
└── RoboDepth
    ├── kitti_data
    │   ├── 2011_09_26
    │   ├── ...
    │   ├── kitti_c
    │   └── val
    ├── cityscapes
    │   ├── camera
    │   │   ├── train
    │   │   └── val
    │   ├── disparity_trainvaltest
    │   │   └── disparity
    │   ├── leftImg8bit_trainvaltest
    │   │   └── leftImg8bit
    │   └── split_file.txt
    ├── nyu
    │   ├── basement_0001a
    │   ├── basement_0001b
    │   ├── ...
    │   ├── nyu_c
    │   └── split_file.txt
    └── ...
```
You can download the entire raw KITTI dataset by running:

```shell
wget -i splits/kitti_archives_to_download.txt -P kitti_data/
```

Then unzip with:

```shell
cd kitti_data/
unzip "*.zip"
cd ..
```
🎯 This dataset weighs about 175GB, so make sure you have enough space to unzip it, too!
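As a rough guard before extracting, the snippet below checks the free space on the current filesystem. It is only a sketch: the 400 GB threshold is an assumption (covering the ~175GB of archives plus the extracted files), not an official requirement.

```shell
# Sketch: warn if free disk space looks too low before extracting.
# The 400 GB threshold is an assumed figure (zips + extracted data).
required_gb=400
avail_gb=$(( $(df -Pk . | awk 'NR==2 {print $4}') / 1024 / 1024 ))
if [ "$avail_gb" -lt "$required_gb" ]; then
    echo "Warning: only ${avail_gb} GB free; at least ${required_gb} GB recommended." >&2
else
    echo "Disk check passed: ${avail_gb} GB free."
fi
```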
The train/test/validation splits are defined in the `splits/` folder.
By default, the code will train a depth estimation model using Zhou's subset of the standard Eigen split of KITTI, which is designed for monocular training.
You can also train a model using the new benchmark split or the odometry split by setting the `--split` flag.
The corrupted KITTI test sets under Eigen split can be downloaded from Google Drive with this link.
Alternatively, you can directly download them to the server by running:

```shell
cd kitti_data/
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1Ohyh8CN0ZS7gc_9l4cIwX4j97rIRwADa' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1Ohyh8CN0ZS7gc_9l4cIwX4j97rIRwADa" -O kitti_c.zip && rm -rf /tmp/cookies.txt
```
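If the cookie-based `wget` invocation above proves fragile, the third-party `gdown` tool (installable with `pip install gdown`) can fetch the same file by its Google Drive ID. This is a sketch of an alternative, not part of the official instructions:

```shell
# Same Google Drive file ID as in the wget command above.
FILE_ID=1Ohyh8CN0ZS7gc_9l4cIwX4j97rIRwADa
if command -v gdown >/dev/null 2>&1; then
    gdown "${FILE_ID}" -O kitti_c.zip
else
    echo "gdown not found; install it with: pip install gdown" >&2
fi
```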
Then unzip with:

```shell
unzip kitti_c.zip
```
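After extraction, a quick sanity check is to count the top-level corruption folders that appear under `kitti_c/`. The helper below is a generic sketch; the exact folder names and their expected count depend on the release, so compare against the directory layout shown at the top of this page.

```shell
# Count the immediate subdirectories of a dataset folder.
count_subdirs() {
    find "$1" -mindepth 1 -maxdepth 1 -type d | wc -l
}

# Example (run from inside kitti_data/ after unzipping):
# count_subdirs kitti_c
```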
🎯 This dataset weighs about 12GB, so make sure you have enough space to unzip it, too!
Coming soon.
Coming soon.
You can download the NYU Depth Dataset V2 from Google Drive with this link.
Alternatively, you can directly download it to the server by running:

```shell
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1wC-io-14RCIL4XTUrQLk6lBqU2AexLVp' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1wC-io-14RCIL4XTUrQLk6lBqU2AexLVp" -O nyu.zip && rm -rf /tmp/cookies.txt
```

Then unzip with:

```shell
unzip nyu.zip
```
🎯 This dataset weighs about 6.2GB; it includes 24,231 image-depth pairs as the training set and the standard 654 images as the validation set.
Coming soon.