- Fill in the Google form to get the dataset download link (download Stanford3dDataset_v1.2_Aligned_Version.zip).
- S3DIS Data Preprocessing
- Extract "Stanford3dDataset_v1.2_Aligned_Version.zip".
- Modify the dataset paths on lines 11 and 12 of s3dis_data_preparation.py.
- Run the script below to prepare the S3DIS dataset.
python3 s3dis_data_preparation.py
After that, the file organization will look like this:
S3DIS/
├── Area_1
│ ├── coords
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── labels
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ └── rgb
│ ├── conferenceRoom_1.npy
│ ...
├── Area_2
...
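If you want to double-check the S3DIS preprocessing output, the short sketch below loads one processed room and verifies that the per-point arrays line up. The path and the assumed array shapes (N x 3 for coords and rgb, length-N for labels) are illustrative assumptions, not something the script guarantees.

# sanity_check_s3dis.py -- minimal sketch; the per-file layout (N x 3 coords/rgb,
# length-N labels) is an assumption about the preprocessing output
import numpy as np

root = "/path/to/S3DIS/Area_1"     # adjust to your processed dataset path
scene = "conferenceRoom_1.npy"

coords = np.load(f"{root}/coords/{scene}")   # expected shape: (N, 3) xyz
rgb = np.load(f"{root}/rgb/{scene}")         # expected shape: (N, 3) colors
labels = np.load(f"{root}/labels/{scene}")   # expected shape: (N,) semantic ids

print(coords.shape, rgb.shape, labels.shape)
assert len(coords) == len(rgb) == len(labels), "per-point arrays should align"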
- Download the SemanticKITTI dataset from the official website.
- Download "KITTI Odometry Benchmark Velodyne point clouds (80 GB)" & "SemanticKITTI label data (179 MB)"
- Extract the archive and organize the files as follows:
SemanticKitti/
└── sequences
├── 00
│ ├── labels
│ │ ├── 000000.label
│ │ ...
│ └── velodyne
│ ├── 000000.bin
│ ...
├── 01
...
...
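For reference, each SemanticKITTI velodyne scan is a flat float32 binary with four values (x, y, z, intensity) per point, and each .label file stores one uint32 per point whose lower 16 bits encode the semantic class. The sketch below reads a single scan/label pair; the paths are placeholders.

# read_semantickitti_scan.py -- minimal sketch for inspecting one scan/label pair
import numpy as np

scan_path = "SemanticKitti/sequences/00/velodyne/000000.bin"    # example paths
label_path = "SemanticKitti/sequences/00/labels/000000.label"

# each point is stored as four float32 values: x, y, z, intensity
points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

# each label is a uint32; the lower 16 bits are the semantic class,
# the upper 16 bits are the instance id
raw = np.fromfile(label_path, dtype=np.uint32)
semantic = raw & 0xFFFF

print(points.shape, semantic.shape)
assert len(points) == len(semantic), "one label per point is expected"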
In this step, we divide each large-scale point cloud scan into sub-scene regions, which serve as the fundamental label-querying units in our ReDAL framework.
- Program Overview
- C++ Program (VCCS algorithm)
- Supported datasets: S3DIS, SemanticKitti, Scannetv2
- Environment: Ubuntu 18.04 (only a CPU is needed in this step)
- Dependency installation: Point Cloud Library (PCL), Boost C++ Library, CMake, cnpy
- Build the project via CMake
We wrote an installation script that performs both steps above.
cd region_division/src
bash install.sh
Example script (S3DIS):
cd region_division/src/build
./supervoxel --dataset s3dis --input-path ~/Desktop/S3DIS_processed \
--voxel-resolution 0.1 --seed-resolution 1 --color-weight 0.5 --spatial-weight 0.5
- Note: voxel-resolution, seed-resolution, color-weight and spatial-weight are hyperparameters in the original VCCS algorithm.
After that, your file organization will look like this:
S3DIS/
├── Area_1
│ ├── coords
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── labels
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── rgb
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ └── supervoxel (new directory)
│ ├── conferenceRoom_1.npy
│ ...
├── Area_2
...
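To get a feel for the generated regions, the sketch below loads one supervoxel file and summarizes the region sizes. It assumes the supervoxel .npy stores one region id per point, aligned with the coords array; adjust the paths for your setup.

# inspect_supervoxel.py -- minimal sketch; assumes the supervoxel .npy holds a
# per-point region id aligned with coords (an assumption about the output format)
import numpy as np

root = "/path/to/S3DIS/Area_1"
scene = "conferenceRoom_1.npy"

coords = np.load(f"{root}/coords/{scene}")          # (N, 3) point coordinates
region_ids = np.load(f"{root}/supervoxel/{scene}")  # (N,) supervoxel id per point

ids, counts = np.unique(region_ids, return_counts=True)
print(f"{len(coords)} points grouped into {len(ids)} regions")
print("smallest / median / largest region size:",
      counts.min(), int(np.median(counts)), counts.max())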
Example script (SemanticKITTI):
cd region_division/src/build
# the dataset key for SemanticKITTI is an assumption here; pass whatever value your supervoxel build expects
./supervoxel --dataset semantickitti --input-path ~/Desktop/SemanticKITTI/sequences \
--voxel-resolution 0.5 --seed-resolution 10 --color-weight 0.0 --spatial-weight 1.0
- Note: voxel-resolution, seed-resolution, color-weight and spatial-weight are hyperparameters in the original VCCS algorithm.
After that, your file organization will look like this:
SemanticKitti/
└── sequences
├── 00
│ ├── labels
│ │ ├── 000000.label
│ │ ...
│ ├── supervoxel (new directory)
│ │ ├── 000000.bin
│ │ ...
│ └── velodyne
│ ├── 000000.bin
│ ...
├── 01
...
...
In this step, we compute the color difference (called color discontinuity in our paper) and the surface variation (called structure complexity in our paper) for each point cloud scan. These point cloud properties provide additional information for measuring the information score of a region and will be used in our ReDAL framework.
Please run both gen_color_gradient.py and gen_surface_variation.py with appropriate arguments.
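Conceptually, color discontinuity measures how much a point's color differs from its local neighborhood, while surface variation is the classic eigenvalue-based estimate (smallest eigenvalue of the local covariance divided by the eigenvalue sum). The sketch below illustrates both quantities with a k-nearest-neighbor search; it is only a conceptual sketch under these common definitions, not the released scripts, and the neighborhood size k is an arbitrary choice.

# feature_sketch.py -- conceptual sketch of the two per-point properties under
# common k-NN formulations; NOT the released gen_* scripts
import numpy as np
from scipy.spatial import cKDTree

def per_point_features(coords, rgb, k=20):
    """coords: (N, 3) xyz, rgb: (N, 3) colors -> (color_diff, surf_var)."""
    tree = cKDTree(coords)
    _, nbr_idx = tree.query(coords, k=k)      # indices of the k nearest neighbors

    color_diff = np.zeros(len(coords))
    surf_var = np.zeros(len(coords))
    for i, nbrs in enumerate(nbr_idx):
        # color discontinuity: mean color distance to the local neighborhood
        color_diff[i] = np.linalg.norm(rgb[nbrs] - rgb[i], axis=1).mean()
        # surface variation: smallest eigenvalue of the neighborhood covariance
        # divided by the sum of eigenvalues (a standard curvature proxy)
        eigvals = np.linalg.eigvalsh(np.cov(coords[nbrs].T))   # ascending order
        surf_var[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return color_diff, surf_var

Judging from the directory listings in this section, gen_color_gradient.py produces the colorgrad files and gen_surface_variation.py produces the boundary files.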
After that, your file organization will look like this:
S3DIS/
├── Area_1
│ ├── boundary (new directory)
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── colorgrad (new directory)
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── coords
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── labels
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── rgb
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ └── supervoxel
│ ├── conferenceRoom_1.npy
│ ...
├── Area_2
...
For SemanticKITTI, only gen_surface_variation.py needs to be run.
After that, your file organization will look like this:
SemanticKitti/
└── sequences
├── 00
│ ├── boundary (new directory)
│ │ ├── 000000.npy
│ │ ...
│ ├── labels
│ │ ├── 000000.label
│ │ ...
│ ├── supervoxel
│ │ ├── 000000.bin
│ │ ...
│ └── velodyne
│ ├── 000000.bin
│ ...
├── 01
...
...
Now, you've finished all data preparation steps.
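As an optional final sanity check, the hypothetical helper below (not part of the repository) walks the processed S3DIS tree shown above and reports any room that is missing a companion file.

# check_layout.py -- hypothetical helper (not part of the repository); verifies
# that every S3DIS room has all companion files produced by the steps above
import os

root = "/path/to/S3DIS"      # adjust to your dataset path
subdirs = ["coords", "rgb", "labels", "supervoxel", "boundary", "colorgrad"]

for area in sorted(os.listdir(root)):
    area_path = os.path.join(root, area)
    if not os.path.isdir(area_path):
        continue
    rooms = set(os.listdir(os.path.join(area_path, "coords")))
    for sub in subdirs:
        sub_path = os.path.join(area_path, sub)
        if not os.path.isdir(sub_path):
            print(f"{area}: missing directory '{sub}'")
            continue
        missing = rooms - set(os.listdir(sub_path))
        if missing:
            print(f"{area}/{sub} is missing {len(missing)} file(s)")
print("layout check finished")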