Created by Zengyi Qin, Jinglu Wang and Yan Lu. This repository contains an implementation of the ACM MM 2020 paper *Weakly Supervised 3D Object Detection from Point Clouds*. Readers are strongly recommended to create and activate a virtual environment with Python 3.6 before running the code.
Clone this repository:
```bash
git clone https://github.com/Zengyi-Qin/Weakly-Supervised-3D-Object-Detection.git
```
Enter the main folder and install the dependencies:
```bash
pip install -r requirements.txt
```
Download the demo data to the main folder and run `unzip vs3d_demo.zip`. Readers can then try out the quick demo in Jupyter Notebook:
```bash
cd core
jupyter notebook demo.ipynb
```
To prepare for training:

1. Download the KITTI object detection dataset (image, calib and label) and place it in `data/kitti`.
2. Download the ground planes and front-view XYZ maps from here and run `unzip vs3d_train.zip`.
3. Download the pretrained teacher network from here and run `unzip vs3d_pretrained.zip`.

The `data` folder should have the following structure:
```
├── data
│   ├── demo
│   ├── kitti
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── sphere
│   │   │   ├── planes
│   │   │   └── velodyne
│   │   ├── train.txt
│   │   └── val.txt
│   └── pretrained
│       ├── student
│       └── teacher
```
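Before moving on, a quick sanity check like the sketch below (not part of the repository; the paths simply follow the tree above) can confirm that everything is in place:

```python
# Hypothetical sanity check for the data layout above; not part of the
# repository. Run it from the main folder after unpacking the downloads.
import os

required = [
    'data/kitti/training/calib',
    'data/kitti/training/image_2',
    'data/kitti/training/label_2',
    'data/kitti/training/sphere',
    'data/kitti/training/planes',
    'data/kitti/training/velodyne',
    'data/kitti/train.txt',
    'data/kitti/val.txt',
    'data/pretrained/teacher',
]
missing = [p for p in required if not os.path.exists(p)]
print('Data layout OK' if not missing else 'Missing: %s' % missing)
```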
The `sphere` folder contains the front-view XYZ maps converted from the `velodyne` point clouds using the script in `./preprocess/sphere_map.py`.
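The conversion is essentially a spherical projection of the point cloud onto an image grid. Below is a minimal sketch of the idea, not the repository's implementation (see `./preprocess/sphere_map.py` for the actual script); the map resolution and the vertical field of view are illustrative assumptions based on the HDL-64E LiDAR used in KITTI:

```python
# Minimal sketch of a spherical (front-view) projection; the resolution
# and field-of-view values are assumptions, not the repository's settings.
import numpy as np

def make_sphere_map(points, height=64, width=512,
                    fov_up=np.radians(2.0), fov_down=np.radians(-24.8)):
    """Project an (N, 3) velodyne point cloud onto an (H, W, 3) XYZ map."""
    xyz = points[:, :3]
    depth = np.linalg.norm(xyz, axis=1)
    keep = depth > 0                        # drop degenerate points
    xyz, depth = xyz[keep], depth[keep]

    yaw = np.arctan2(xyz[:, 1], xyz[:, 0])  # azimuth angle
    pitch = np.arcsin(xyz[:, 2] / depth)    # elevation angle

    # Map angles to integer pixel coordinates.
    u = (0.5 * (1.0 - yaw / np.pi) * width).astype(np.int32)
    v = ((fov_up - pitch) / (fov_up - fov_down) * height).astype(np.int32)
    u = np.clip(u, 0, width - 1)
    v = np.clip(v, 0, height - 1)

    # Write far points first so that nearer points overwrite them.
    order = np.argsort(-depth)
    xyz_map = np.zeros((height, width, 3), dtype=np.float32)
    xyz_map[v[order], u[order]] = xyz[order]
    return xyz_map
```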
After data preparation, readers can train VS3D from scratch by running:
```bash
cd core
python main.py --mode train --gpu GPU_ID
```
The models are saved in `./core/runs/weights` during training. Readers can refer to `./core/main.py` for other training options.
Readers can run inference on the KITTI validation set by running:
```bash
cd core
python main.py --mode evaluate --gpu GPU_ID --student_model SAVED_MODEL
```
Readers can also use the pretrained model directly for inference by passing `--student_model ../data/pretrained/student/model_lidar_158000`. Predicted 3D bounding boxes are saved in `./output/bbox` in KITTI format.
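Each file in `./output/bbox` follows the standard KITTI label format: object type, truncation, occlusion, observation angle alpha, the 2D box (left, top, right, bottom), the 3D dimensions (height, width, length), the 3D location (x, y, z) in camera coordinates, the yaw angle rotation_y, and optionally a detection score. A minimal reader might look like the sketch below; the file name in the usage line is hypothetical:

```python
# Minimal reader for KITTI-format prediction files; the example file name
# at the bottom is hypothetical.
def read_kitti_bboxes(path):
    boxes = []
    with open(path) as f:
        for line in f:
            v = line.split()
            boxes.append({
                'type': v[0],
                'bbox_2d': [float(x) for x in v[4:8]],      # left, top, right, bottom
                'dimensions': [float(x) for x in v[8:11]],  # height, width, length
                'location': [float(x) for x in v[11:14]],   # x, y, z (camera frame)
                'rotation_y': float(v[14]),
                'score': float(v[15]) if len(v) > 15 else None,
            })
    return boxes

boxes = read_kitti_bboxes('./output/bbox/000001.txt')
```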
If you find this work useful, please cite:
```bibtex
@article{qin2020vs3d,
  title={Weakly Supervised 3D Object Detection from Point Clouds},
  author={Zengyi Qin and Jinglu Wang and Yan Lu},
  journal={ACM Multimedia},
  year={2020}
}
```