An efficient PyTorch library for Point Cloud Completion.
Project page | Paper | Video
Chulin Xie*, Chuxin Wang*, Bo Zhang, Hao Yang, Dong Chen, and Fang Wen. (*Equal contribution)
We propose a novel Style-based Point Generator with Adversarial Rendering (SpareNet) for point cloud completion. Firstly, we present the channel-attentive EdgeConv to fully exploit the local structures as well as the global shape in point features. Secondly, we observe that the concatenation manner used by vanilla foldings limits their potential to generate a complex and faithful shape. Enlightened by the success of StyleGAN, we regard the shape feature as a style code that modulates the normalization layers during the folding, which considerably enhances its capability. Thirdly, we realize that existing point supervisions, e.g., Chamfer Distance or Earth Mover's Distance, cannot faithfully reflect the perceptual quality of the reconstructed points. To address this, we propose to project the completed points to depth maps with a differentiable renderer and apply adversarial training to promote perceptual realism under different viewpoints. Comprehensive experiments on ShapeNet and KITTI prove the effectiveness of our method, which achieves state-of-the-art quantitative performance while offering superior visual quality.
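The core idea of the style-based folding can be sketched in a few lines: instead of concatenating the shape feature to every point feature, the shape feature predicts a per-channel scale and shift that modulates normalized point features, as in StyleGAN. The snippet below is a minimal numpy illustration of this idea, not the actual SpareNet layer; the function and weight names (`style_modulated_norm`, `w_gamma`, `w_beta`) are hypothetical.

```python
import numpy as np

def style_modulated_norm(point_feats, shape_code, w_gamma, w_beta, eps=1e-5):
    """Normalize per-point features, then modulate them with a style code.

    point_feats: [num_points, channels] features inside a folding layer.
    shape_code:  [code_dim] global shape feature, treated as the style code.
    w_gamma, w_beta: [code_dim, channels] learned projections (hypothetical).
    """
    # Instance-normalize each channel across the points
    mean = point_feats.mean(axis=0, keepdims=True)
    std = point_feats.std(axis=0, keepdims=True)
    normalized = (point_feats - mean) / (std + eps)
    # The style code predicts a per-channel scale and shift, in the spirit of
    # StyleGAN's modulated normalization layers
    gamma = shape_code @ w_gamma
    beta = shape_code @ w_beta
    return normalized * (1.0 + gamma) + beta
```

Because the modulation acts multiplicatively on every channel rather than occupying a fixed slice of the concatenated feature, the shape code can reshape the whole folding computation.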
- Create a virtual environment via `conda`.

  ```shell
  conda create -n sparenet python=3.7
  conda activate sparenet
  ```
- Install `torch` and `torchvision`.

  ```shell
  conda install pytorch cudatoolkit=10.1 torchvision -c pytorch
  ```
- Install requirements.

  ```shell
  pip install -r requirements.txt
  ```
- Install the CUDA extensions.

  ```shell
  sh setup_env.sh
  ```
- Download the processed ShapeNet dataset (16384 points) generated by GRNet, and the KITTI dataset.
- Update the file paths of the datasets in `configs/base_config.py`:

  ```python
  __C.DATASETS.shapenet.partial_points_path = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"
  __C.DATASETS.shapenet.complete_points_path = "/path/to/datasets/ShapeNetCompletion/%s/complete/%s/%s.pcd"
  __C.DATASETS.kitti.partial_points_path = "/path/to/datasets/KITTI/cars/%s.pcd"
  __C.DATASETS.kitti.bounding_box_file_path = "/path/to/datasets/KITTI/bboxes/%s.txt"

  # Dataset Options: ShapeNet, ShapeNetCars, KITTI
  __C.DATASET.train_dataset = "ShapeNet"
  __C.DATASET.test_dataset = "ShapeNet"
  ```
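The `%s`/`%02d` placeholders in these path templates are filled with standard Python %-formatting: for the partial ShapeNet clouds, the dataloader substitutes the subset, category id, model id, and view index. A minimal illustration, using the real ShapeNet airplane category id but a made-up model id:

```python
# Hypothetical values for illustration: "train" subset, category 02691156
# (airplane), a placeholder model id, and view index 3
partial_tpl = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"
path = partial_tpl % ("train", "02691156", "model_id", 3)
# The %02d placeholder zero-pads the view index to two digits ("03")
```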
The pretrained models:

- MSN for ShapeNet (8192 points)
- Run:

  ```shell
  python test.py --gpu ${GPUS} \
                 --workdir ${WORK_DIR} \
                 --model ${network} \
                 --weights ${path to checkpoint} \
                 --test_mode ${mode}
  ```
- Example:

  ```shell
  python test.py --gpu 0 --workdir /path/to/logfiles --model sparenet --weights /path/to/checkpoint --test_mode default
  ```
All files produced during training, such as log messages and checkpoints, will be saved to the work directory.
- Run:

  ```shell
  python train.py --gpu ${GPUS} \
                  --workdir ${WORK_DIR} \
                  --model ${network} \
                  --weights ${path to checkpoint}
  ```
- Example:

  ```shell
  python train.py --gpu 0,1,2,3 --workdir /path/to/logfiles --model sparenet --weights /path/to/checkpoint
  ```
A fully differentiable point renderer that enables end-to-end rendering from a 3D point cloud to 2D depth maps. See the paper for details.
The renderer takes a point cloud (`pcd`), view indices (`views`), and point radii (`radius`) as inputs, and outputs `depth_maps`.
- Example:

  ```python
  # `projection_mode`: a str with value "perspective" or "orthorgonal"
  # `eyepos_scale`: a float that defines the distance of the eyes to (0, 0, 0)
  # `image_size`: an int defining the output image size
  renderer = ComputeDepthMaps(projection_mode, eyepos_scale, image_size)

  # `data`: a tensor with shape [batch_size, num_points, 3]
  # `view_id`: the index of the selected view, satisfying 0 <= view_id < 8
  # `radius_list`: a list of floats defining the kernel radius used to render each point
  depthmaps = renderer(data, view_id, radius_list)
  ```
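To see why such a renderer is differentiable, note that each point can be splatted onto the image plane with a smooth kernel, so every output pixel is a smooth function of the point coordinates. The numpy sketch below illustrates only this idea; it is not the project's `ComputeDepthMaps` implementation, and the function name and parameters are invented for the example.

```python
import numpy as np

def soft_depth_map(points, image_size=33, sigma=0.05):
    """Splat points onto an image plane with Gaussian weights.

    points: [N, 3] array with x, y in [-1, 1] and z the depth value.
    Each pixel receives a weighted average of the depths of nearby points,
    so the output varies smoothly with the point coordinates and gradients
    could flow back to them in an autodiff framework.
    """
    xs = np.linspace(-1, 1, image_size)
    grid_x, grid_y = np.meshgrid(xs, xs)                 # [H, W] pixel centers
    depth = np.zeros((image_size, image_size))
    weight = np.full((image_size, image_size), 1e-8)     # avoid division by zero
    for x, y, z in points:
        w = np.exp(-((grid_x - x) ** 2 + (grid_y - y) ** 2) / (2 * sigma ** 2))
        depth += w * z
        weight += w
    return depth / weight
```

In contrast to hard z-buffering, which is piecewise constant in the point positions, this soft weighting is what makes adversarial losses on the rendered depth maps trainable end-to-end.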
- Run your model and save its results on the test dataset.
- Update the file paths of the results in `test_fpd.py` and run it:

  ```python
  parser.add_argument('--log_dir', default='/path/to/save/logs')
  parser.add_argument('--data_dir', default='/path/to/test/dataset/pcds')
  parser.add_argument('--fake_dir', default='/path/to/methods/pcds', help='/path/to/results/shapenet_fc/pcds/')
  ```
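FPD (Fréchet Point Cloud Distance) compares the Gaussian statistics of features extracted from real and generated point clouds, in the same way FID does for images: FPD = ||μ₁ − μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^½). The sketch below shows only the Fréchet distance between two sets of feature statistics; the feature extractor that produces the means and covariances (typically a pretrained point cloud classifier) is omitted.

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between two Gaussians N(mu1, cov1) and N(mu2, cov2)."""
    def sqrtm_psd(m):
        # Matrix square root of a symmetric PSD matrix via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, 0.0, None)
        return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

    # Tr((C1 C2)^1/2) computed through the symmetric form C1^1/2 C2 C1^1/2
    s1 = sqrtm_psd(cov1)
    covmean = sqrtm_psd(s1 @ cov2 @ s1)
    mean_term = np.sum((mu1 - mu2) ** 2)
    trace_term = np.trace(cov1) + np.trace(cov2) - 2.0 * np.trace(covmean)
    return float(mean_term + trace_term)
```

Identical statistics give a distance of zero, and a pure mean shift contributes its squared norm, which is a quick sanity check for any FPD implementation.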
The code and the pretrained models in this repository are released under the MIT license, as specified in the LICENSE file.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
If you like our work and use the codebase or models for your research, please cite our work as follows.
```bibtex
@InProceedings{Xie_2021_CVPR,
    author    = {Xie, Chulin and Wang, Chuxin and Zhang, Bo and Yang, Hao and Chen, Dong and Wen, Fang},
    title     = {Style-Based Point Generator With Adversarial Rendering for Point Cloud Completion},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {4619-4628}
}
```