This repository contains the PyTorch implementation of the paper:
DensePoint: Learning Densely Contextual Representation for Efficient Point Cloud Processing [arXiv] [CVF]
Yongcheng Liu, Bin Fan, Gaofeng Meng, Jiwen Lu, Shiming Xiang and Chunhong Pan
ICCV 2019
If our paper is helpful for your research, please consider citing:
@inproceedings{liu2019densepoint,
  author    = {Yongcheng Liu and
               Bin Fan and
               Gaofeng Meng and
               Jiwen Lu and
               Shiming Xiang and
               Chunhong Pan},
  title     = {DensePoint: Learning Densely Contextual Representation for Efficient Point Cloud Processing},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  pages     = {5239--5248},
  year      = {2019}
}
Requirements
- Ubuntu 14.04
- Python 3 (recommend Anaconda3)
- PyTorch 0.3.*
- CMake > 2.8
- CUDA 8.0 + cuDNN 5.1
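A quick way to confirm your environment matches these requirements is a short PyTorch check (a minimal sketch, nothing repository-specific):

```python
# Environment sanity check (a minimal sketch; adjust to your own setup).
import torch

print(torch.__version__)               # expected to start with "0.3"
print(torch.cuda.is_available())       # should be True for GPU training
print(torch.backends.cudnn.version())  # expected 5xxx for cuDNN 5.1
```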
Building Kernel
git clone https://github.com/Yochengliu/DensePoint.git
cd DensePoint
mkdir build && cd build
cmake .. && make
Dataset
- Shape Classification: download and unzip ModelNet40 (415M). Replace $data_root$ in cfgs/config_cls.yaml with the dataset parent path (see the path-check sketch below).
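The sketch below checks that the dataset path is in place. The folder name modelnet40_ply_hdf5_2048 is an assumption based on the common ModelNet40 HDF5 release; adjust it to whatever the unzipped archive actually contains.

```python
# Sketch: verify the location that $data_root$ in cfgs/config_cls.yaml should point to.
# The folder name below is an assumption; change it if your unzipped archive differs.
import os

data_root = "/path/to/datasets"  # the value you put into cfgs/config_cls.yaml
dataset_dir = os.path.join(data_root, "modelnet40_ply_hdf5_2048")

print(os.path.isdir(dataset_dir))           # should print True
print(sorted(os.listdir(dataset_dir))[:5])  # first few files of the dataset
```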
Shape Classification: Training
sh train_cls.sh
We have trained a 6-layer classification model, stored in the cls folder, whose accuracy is 92.38%.
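The released checkpoint can be inspected with plain PyTorch. The sketch below only loads the saved file; constructing the DensePoint network itself is repository-specific, so no model class is assumed here.

```python
# Sketch: load and inspect the released classification checkpoint on the CPU.
import torch

ckpt = torch.load("cls/model_cls_L6_iter_36567_acc_0.923825.pth",
                  map_location=lambda storage, loc: storage)

# The file may hold a bare state_dict or a dict wrapping one; print what it contains.
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
else:
    print(type(ckpt))
```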
Shape Classification: Evaluation (Voting)
Voting script: voting_evaluate_cls.py
You can use our model cls/model_cls_L6_iter_36567_acc_0.923825.pth as the checkpoint in config_cls.yaml; after voting, you should get an accuracy of 92.5% if everything goes right.
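For reference, the voting idea is to run the trained classifier several times under random augmentation and average the predictions; voting_evaluate_cls.py is the script that actually implements it. The sketch below is only an illustration of that idea, with hypothetical function and argument names rather than the repository's API.

```python
# Illustrative sketch of prediction voting (names are hypothetical, not the repo's API).
import random
import torch
import torch.nn.functional as F

def vote_predict(model, points, num_votes=10, scale_range=(0.8, 1.2)):
    """points: (B, N, 3) point clouds; returns class probabilities averaged over votes."""
    model.eval()
    scores = None
    for _ in range(num_votes):
        scale = random.uniform(*scale_range)  # random isotropic scaling as augmentation
        logits = model(points * scale)        # one forward pass on the augmented input
        probs = F.softmax(logits, dim=1)
        scores = probs if scores is None else scores + probs
    return scores / num_votes                 # averaged scores; argmax gives the label
```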
The code is released under MIT License (see LICENSE file for details).
The code is heavily borrowed from Pointnet2_PyTorch.
If you have any ideas or questions about our research to share with us, please contact yongcheng.liu@nlpr.ia.ac.cn.