Look at `data_preparation/dataset/globals_dirs.py` and change the folder paths to point to where you would like to store the data.
- Download ScanNet v2 data from HERE. Let `DATA_ROOT` be the path to the folder that contains the downloaded annotations. Under `DATA_ROOT` there should be a folder `scans`, and under `scans` there should be folders with names like `scene0001_01` (see the layout sketch below). For each scene we need the files ending in `_vh_clean_2.ply`, `_vh_clean_2.0.010000.segs.json`, `_vh_clean_2.labels.ply`, and `_vh_clean.aggregation.json`. We provide a helper download script, `download_scannet_files.py`, which downloads only the portion of the ScanNet dataset that we need. You still need to obtain `download-scannet-v2.py` (after filling out the ScanNet agreement form) before using this helper script. Additionally, you might need to make some modifications to `download-scannet-v2.py`, e.g. removing the `input("")` call that requires manual keyboard input for each scene.
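A rough sketch of the expected layout under `DATA_ROOT` (the scene name is only an example; the per-scene filenames are assumed to follow the standard ScanNet convention of being prefixed with the scene name):

```
DATA_ROOT/
└── scans/
    └── scene0001_01/
        ├── scene0001_01_vh_clean_2.ply
        ├── scene0001_01_vh_clean_2.0.010000.segs.json
        ├── scene0001_01_vh_clean_2.labels.ply
        └── scene0001_01_vh_clean.aggregation.json
```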
For ScanNet, execute:

```
python data_preparation/scannet/scannet_preprocessing.py preprocess --data_dir PATH_TO_RAW_SCANS --save_dir SAVE_DATA
```

Add `--scannet200 True` for ScanNet200.
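For example, the full ScanNet200 command (with `PATH_TO_RAW_SCANS` and `SAVE_DATA` standing in for your own paths) would be:

```
python data_preparation/scannet/scannet_preprocessing.py preprocess \
    --data_dir PATH_TO_RAW_SCANS \
    --save_dir SAVE_DATA \
    --scannet200 True
```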
Make sure to change `SCANNET_DATA_DIR` in `odin/config.py` to `SAVE_DATA/train_validation_database.yaml`. Similarly, change `SCANNET200_DATA_DIR` in `odin/config.py` to the corresponding `SAVE_DATA/train_validation_database.yaml` produced by the ScanNet200 preprocessing run.
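If you want to quickly locate the two entries to edit, one optional way (assuming nothing beyond the variable names and file path mentioned above) is to grep for them:

```
grep -nE "SCANNET_DATA_DIR|SCANNET200_DATA_DIR" odin/config.py
```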
We provide preprocessed RGB-D data (~80G) for all scenes. You can download it using gdown in the data directory:

```
gdown --id 1Xq84J9Gl9CVns_4Q0gDBxcPoA7hSf-WY
```
You can skip this step if you just want to use our preprocessed RGB-D data.
- First, download the `.sens` files as well by using the `--type .sens` argument with the ScanNet download script (see the example after these steps).
- Execute the following script (make sure to change the data directory paths in the script):

```
bash data_preparation/scannet/preprocess_sens.sh
```
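A hypothetical single-scene example of the `.sens` download, assuming the standard interface of the official ScanNet download script (only `--type` is mentioned above; the `-o` output-directory and `--id` scene flags are assumptions based on the stock script):

```
python download-scannet-v2.py -o DATA_ROOT --id scene0001_01 --type .sens
```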
For ScanNet, execute:

```
python data_preparation/scannet/scannet2coco.py
```

For ScanNet200, add `--scannet200` to the above command.
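That is, the ScanNet200 variant of the command is simply:

```
python data_preparation/scannet/scannet2coco.py --scannet200
```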