SSC-RS

SSC-RS: Elevate LiDAR Semantic Scene Completion with Representation Separation and BEV Fusion

This repository contains the implementation of SSC-RS, introduced in the following IROS 2023 paper. [arxiv paper]

Preparation

Prerequisites

Tested with

  • python 3.7.10
  • numpy 1.20.2
  • torch 1.6.0
  • torchvision 0.7.0
  • torch-scatter 2.0.8
  • spconv 1.1
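
As a quick sanity check of the environment, the short sketch below prints the installed versions of the packages listed above (import names follow the usual conventions; the fallback covers packages that do not expose __version__).

# Sanity-check sketch: print the versions of the packages listed above.
import numpy
import torch
import torchvision
import torch_scatter
import spconv

for name, module in [
    ("numpy", numpy),
    ("torch", torch),
    ("torchvision", torchvision),
    ("torch-scatter", torch_scatter),
    ("spconv", spconv),
]:
    # Not every package exposes __version__ the same way, hence the fallback.
    print(f"{name}: {getattr(module, '__version__', 'unknown')}")
print(f"CUDA available: {torch.cuda.is_available()}")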

Dataset

Please download the Semantic Scene Completion dataset (v1.1) from the SemanticKITTI website and extract it.

Alternatively, you can use the voxelizer to generate the semantic scene completion ground truth yourself.

The dataset folder should be organized as follows.

SemanticKITTI
├── dataset
│   ├── sequences
│   │  ├── 00
│   │  │  ├── labels
│   │  │  ├── velodyne
│   │  │  ├── voxels
│   │  │  ├── [OTHER FILES OR FOLDERS]
│   │  ├── 01
│   │  ├── ... ...
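
To confirm the extraction matches this layout, a small sketch like the following (the dataset path is a placeholder) checks that each sequence contains the expected subfolders; note that the hidden test sequences (11-21) ship without labels, so missing labels there are expected.

# Sketch: verify the SemanticKITTI folder layout expected by SSC-RS.
# DATA_ROOT is a placeholder; point it at .../SemanticKITTI/dataset/sequences.
import os

DATA_ROOT = "/path/to/SemanticKITTI/dataset/sequences"
EXPECTED = ["labels", "velodyne", "voxels"]

for seq in sorted(os.listdir(DATA_ROOT)):
    seq_dir = os.path.join(DATA_ROOT, seq)
    if not os.path.isdir(seq_dir):
        continue
    missing = [d for d in EXPECTED if not os.path.isdir(os.path.join(seq_dir, d))]
    # Test sequences (11-21) have no labels, so 'labels' may legitimately be missing there.
    print(f"sequence {seq}: {'missing ' + ', '.join(missing) if missing else 'ok'}")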

Getting Started

Clone the repository:

git clone https://github.com/Jieqianyu/SSC-RS.git

We provide training routine examples in the cfgs folder. Make sure to change the dataset path in these files to the location of your extracted dataset if you want to use them for training. Additionally, you can change the folder where the performance logs and model states will be stored (a short sketch of making these edits programmatically follows the list below).

  • config_dict['DATASET']['DATA_ROOT'] should be changed to the root directory of the SemanticKITTI dataset (/.../SemanticKITTI/dataset/sequences)
  • config_dict['OUTPUT']['OUT_ROOT'] should be changed to the desired output folder.
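
For reference, here is a hedged sketch of making those two edits programmatically, assuming the configuration file is plain YAML that is parsed into the nested dictionary referred to as config_dict above (the two paths are placeholders).

# Sketch (assumption: cfgs/DSC-Base.yaml is plain YAML parsed into the nested
# dict referred to as config_dict; the two paths below are placeholders).
import yaml

CFG_PATH = "cfgs/DSC-Base.yaml"

with open(CFG_PATH) as f:
    config_dict = yaml.safe_load(f)

# Dataset root: the extracted SemanticKITTI sequences folder.
config_dict["DATASET"]["DATA_ROOT"] = "/path/to/SemanticKITTI/dataset/sequences"
# Output root: where performance logs and model states will be stored.
config_dict["OUTPUT"]["OUT_ROOT"] = "/path/to/output"

with open(CFG_PATH, "w") as f:
    yaml.safe_dump(config_dict, f, default_flow_style=False)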

Train SSC-RS Net

$ cd <root dir of this repo>
$ python train.py --cfg cfgs/DSC-Base.yaml --dset_root <path/dataset/root>

Validation

Validation passes are performed during the training routine. An additional pass over the validation set with a saved model can be done with the validate.py script. You need to provide the path to the saved model and the dataset root directory.

$ cd <root dir of this repo>
$ python validate.py --weights </path/to/model.pth> --dset_root <path/dataset/root>

Test

Since SemanticKITTI contains a hidden test set, we provide a test routine that saves the predicted outputs in the same format as SemanticKITTI, so they can be compressed and uploaded to the SemanticKITTI Semantic Scene Completion Benchmark.

We recommend passing the compressed data through the official checking script provided in the SemanticKITTI Development Kit to avoid any issues.

You can specify which checkpoint you want to use for testing. We used the one that performed best on the validation set during training. For testing, use the following command.

$ cd <root dir of this repo>
$ python test.py --weights </path/to/model.pth> --dset_root <path/dataset/root> --out_path <predictions/output/path>
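
For convenience, the sketch below shows one way to compress the predictions for upload. The folder layout inside the output path is an assumption (the benchmark expects sequences/<seq>/predictions/*.label), so verify the resulting archive with the official checking script mentioned above before submitting.

# Sketch: package the predicted .label files for the benchmark upload.
# Assumption: test.py writes predictions under
# <out_path>/sequences/<seq>/predictions/*.label; adjust if your layout differs,
# and run the official SemanticKITTI checking script on the archive first.
import os
import zipfile

OUT_PATH = "/path/to/predictions"  # same value as --out_path
ARCHIVE = "ssc_rs_submission.zip"

with zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(OUT_PATH):
        for name in files:
            if name.endswith(".label"):
                full = os.path.join(root, name)
                # Store paths relative to out_path so the archive root keeps
                # the sequences/<seq>/predictions/ structure.
                zf.write(full, os.path.relpath(full, OUT_PATH))

print(f"wrote {ARCHIVE}")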

Pretrained Model

You can download the model with the scores below from this Google Drive link.

Model     Segmentation (mIoU)   Completion (IoU)
SSC-RS    24.2                  59.7

* Results reported on the SemanticKITTI Semantic Scene Completion leaderboard (link).
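
To inspect the downloaded checkpoint before running the scripts, a hedged sketch like the one below can be used; it assumes the file is a standard PyTorch checkpoint (a state_dict, possibly wrapped in a dictionary). For actual evaluation, pass the file to validate.py or test.py via --weights.

# Sketch: inspect a downloaded checkpoint. Assumption: the .pth file is a
# standard PyTorch checkpoint (a state_dict, possibly wrapped in a dict).
import torch

ckpt = torch.load("/path/to/model.pth", map_location="cpu")

# Some checkpoints wrap the weights under a 'model' or 'state_dict' key.
state_dict = ckpt
if isinstance(ckpt, dict):
    state_dict = ckpt.get("model", ckpt.get("state_dict", ckpt))

print(f"{len(state_dict)} entries; first few parameter names:")
for name in list(state_dict)[:5]:
    print(" ", name)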

Acknowledgement

This project would not be possible without multiple great open-sourced codebases.
