
Self-Point-Flow: Self-Supervised Scene Flow Estimation from Point Clouds with Optimal Transport and Random Walk (CVPR2021)


Self-Point-Flow

This is the PyTorch code for Self-Point-Flow: Self-Supervised Scene Flow Estimation from Point Clouds with Optimal Transport and Random Walk. The code is created by Ruibo Li (ruibo001@e.ntu.edu.sg).

Prerequisites

  • Python 3.6.13
  • NVIDIA GPU + CUDA CuDNN
  • PyTorch (torch == 1.4.0)
  • tqdm
  • scikit-learn
  • pptk
  • yaml

Create a conda environment for Self-Point-Flow:

conda create -n Self-Flow python=3.6.13
conda activate Self-Flow
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
pip install tqdm pptk PyYAML scikit-learn

Compile the furthest point sampling, grouping, and gathering operations for PyTorch. We use the implementation from this repo.

cd pointnet2
python setup.py install
cd ../
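The compiled CUDA extension provides furthest point sampling (FPS), which repeatedly picks the point farthest from all points chosen so far. As an illustration of what the op computes (a minimal NumPy sketch, not the repo's CUDA implementation):

```python
import numpy as np

def furthest_point_sample(points, n_samples):
    """Greedy furthest point sampling on an (N, 3) array.

    Returns indices of n_samples points that are mutually far apart.
    Illustrative NumPy version of the CUDA op compiled above.
    """
    n = points.shape[0]
    indices = np.zeros(n_samples, dtype=np.int64)
    # Squared distance from every point to its nearest selected point.
    dist = np.full(n, np.inf)
    indices[0] = 0  # start from an arbitrary point
    for i in range(1, n_samples):
        # Only the most recently selected point can lower the distances.
        delta = points - points[indices[i - 1]]
        dist = np.minimum(dist, np.einsum('ij,ij->i', delta, delta))
        indices[i] = np.argmax(dist)
    return indices
```

The CUDA version parallelizes the distance update across points; the greedy logic is the same.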

Data preprocess

By default, the datasets are stored in SAVE_PATH.

FlyingThings3D

Download and unzip the "Disparity", "Disparity Occlusions", "Disparity change", "Optical flow", and "Flow Occlusions" subsets of the DispNet/FlowNet2.0 dataset from the FlyingThings3D website (we used the paths from this file; torrent downloads are now also available). Unzip them all into the same directory, RAW_DATA_PATH. Then run the following script for 3D reconstruction:

python data_preprocess/process_flyingthings3d_subset.py --raw_data_path RAW_DATA_PATH --save_path SAVE_PATH/FlyingThings3D_subset_processed_35m --only_save_near_pts

Generate surface normals for the training set of FlyingThings3D:

python data_preprocess/process_FT3D_s_train_data.py --data_root SAVE_PATH/FlyingThings3D_subset_processed_35m/train --save_root  SAVE_PATH/FlyingThings3D_subset_processed_35m/train_s_norm

This dataset is denoted FT3Ds in our paper.
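The script above precomputes a surface normal for each point. A standard way to estimate per-point normals (a sketch of the general technique; the script's exact parameters and method are not specified here) is local PCA: the normal is the eigenvector of the k-neighborhood covariance with the smallest eigenvalue.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def estimate_normals(points, k=10):
    """Estimate unit normals for an (N, 3) point cloud via local PCA.

    For each point, the normal is the eigenvector of its k-nearest-
    neighborhood covariance matrix with the smallest eigenvalue.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    normals = np.empty_like(points)
    for i, neigh in enumerate(idx):
        patch = points[neigh] - points[neigh].mean(axis=0)
        # eigh returns eigenvalues in ascending order for the 3x3 covariance.
        _, vecs = np.linalg.eigh(patch.T @ patch)
        normals[i] = vecs[:, 0]  # smallest-eigenvalue direction
    return normals
```

Note the sign of each normal is ambiguous; pipelines typically orient them consistently (e.g. toward the sensor) afterwards.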

KITTI

Download and unzip KITTI Scene Flow Evaluation 2015 to directory RAW_DATA_PATH. Run the following script for 3D reconstruction:

python data_preprocess/process_kitti.py RAW_DATA_PATH SAVE_PATH/KITTI_processed_occ_final

This dataset is denoted KITTIs in our paper.
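The "3D reconstruction" in these preprocessing scripts back-projects disparity maps into point clouds. The standard stereo pinhole model (the intrinsics below are placeholders, not the repo's calibration values) gives depth Z = f * b / d and then X = (u - cx) * Z / f, Y = (v - cy) * Z / f:

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a disparity map (H, W) to an (H*W, 3) point cloud.

    Standard stereo pinhole model: Z = f * baseline / disparity.
    f, baseline, cx, cy are placeholder intrinsics, not KITTI's values.
    """
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel coordinates
    z = f * baseline / disparity       # depth
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Pixels with zero or occluded disparity must be masked out before this step, since they produce infinite or invalid depths.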

  • KITTI scene flow data provided by FlowNet3D

Download and unzip data processed by FlowNet3D to directory SAVE_PATH. This dataset is denoted KITTIo in our paper.

  • Unlabeled KITTI raw data

In our paper, we use raw data from KITTI for self-supervised scene flow learning. We release the unlabeled training data here for download. This dataset is denoted KITTIr in our paper.

Evaluation

Set data_root in each configuration file to SAVE_PATH in the data preprocess section.
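The README references two config fields, data_root and dataset; a hypothetical fragment with those fields might look like the following (key names other than data_root and dataset are illustrative, not taken from the repo):

```yaml
# Hypothetical excerpt of config_evaluate_FT3D_s.yaml
data_root: /path/to/SAVE_PATH
dataset: FT3D_s_test   # or KITTI_s_test to evaluate on KITTIs
```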

Trained models

Two trained models are available for download: model trained on FT3Ds and model trained on KITTIr.

Testing

  • Model trained on FT3Ds

To evaluate this pre-trained model on the FT3Ds test data, set dataset to FT3D_s_test; to evaluate it on the KITTIs data, set dataset to KITTI_s_test. Then run:

python evaluate.py config_evaluate_FT3D_s.yaml
  • Model trained on KITTIr

Evaluate this pre-trained model on KITTIo:

python evaluate.py config_evaluate_KITTI_o.yaml

Training

Set data_root in each configuration file to SAVE_PATH in the data preprocess section.

  • Train model on FT3Ds with 8192 points as input:
python train_FT3D_s.py config_train_FT3D_s.yaml
  • Train model on KITTIr with 2048 points as input:
python train_KITTI_r.py config_train_KITTI_r.yaml

Citation

If you find this code useful, please cite our paper:

@inproceedings{li2021self,
  title={Self-point-flow: Self-supervised scene flow estimation from point clouds with optimal transport and random walk},
  author={Li, Ruibo and Lin, Guosheng and Xie, Lihua},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={15577--15586},
  year={2021}
}

Acknowledgement

Our code is based on HPLFlowNet, PointPWC, and FLOT. The flownet3d model is based on Pointnet2.PyTorch, FlowNet3D, and flownet3d_pytorch.
