Code for "Point Cloud Completion via Skeleton-Detail Transformer", IEEE Transactions on Visualization and Computer Graphics (TVCG), 2022. See the IEEE PDF.
In this work, we present a coarse-to-fine completion framework that makes full use of both neighboring and long-distance region cues for point cloud completion. Our network leverages a Skeleton-Detail Transformer, which contains cross-attention and self-attention layers, to fully explore the correlation from local patterns to the global shape and use it to enhance the overall skeleton. We also propose a selective attention mechanism that reduces memory usage in the attention process without significantly affecting performance.
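For a concrete picture of how skeleton and detail features interact, below is a minimal, illustrative PyTorch sketch of a block with cross-attention followed by self-attention. The class name, dimensions, and layer layout are assumptions made for exposition, not the exact architecture in this repository (which also includes the selective attention mechanism described above).

```python
# Illustrative sketch only: names, dimensions, and the block layout are assumptions,
# not the configuration used in this repository.
import torch
import torch.nn as nn

class SkeletonDetailBlock(nn.Module):
    """Cross-attention from coarse skeleton tokens to local detail tokens,
    followed by self-attention over the skeleton tokens."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads)
        self.self_attn = nn.MultiheadAttention(dim, num_heads)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))

    def forward(self, skeleton, detail):
        # skeleton: (N_coarse, B, dim) tokens describing the global skeleton
        # detail:   (N_local,  B, dim) tokens describing local patterns
        attn, _ = self.cross_attn(skeleton, detail, detail)      # gather local-to-global cues
        skeleton = self.norm1(skeleton + attn)
        attn, _ = self.self_attn(skeleton, skeleton, skeleton)   # refine the skeleton globally
        skeleton = self.norm2(skeleton + attn)
        return skeleton + self.ffn(skeleton)

# Example: 128 skeleton tokens attend to 512 detail tokens (batch size 2).
block = SkeletonDetailBlock()
out = block(torch.randn(128, 2, 256), torch.randn(512, 2, 256))  # -> (128, 2, 256)
```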
- Python 3
- CUDA
- PyTorch
- open3d-python
This code is built using PyTorch 1.7.1 with CUDA 10.2 and tested on Ubuntu 18.04 with Python 3.6.
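As a quick sanity check of the environment, you can confirm that the installed PyTorch and CUDA versions match the ones above:

```python
# Quick environment check; with the setup above this should print "1.7.1 10.2 True".
import torch
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```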
The required libraries are included under `util/`; you need to compile them first. Each subfolder contains a `Readme.md` with compilation instructions.
Download the pre-trained models from Google Drive and put them in the `trained_model` directory.
For PCN:
- Download the ShapeNet test data from Google Drive and put it in the `data/pcn` folder. We use the same testing data as the PCN project, but in `.h5` format (the files can be inspected as shown in the sketch after this list).
- Run `sh test.sh`. You should first set `model_path` to the folder containing your pre-trained model and `data_path` to the testing files.
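If you want to peek inside the downloaded `.h5` test files before running the script, a minimal `h5py` check looks like the sketch below; the file name is only an example, and the dataset layout depends on how the files on Google Drive were exported.

```python
# Peek inside a downloaded .h5 test file; the file name is an example, and the
# dataset layout depends on how the files were exported.
import h5py

with h5py.File('data/pcn/test.h5', 'r') as f:
    # Print every dataset name and its shape (groups print with an empty shape).
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))
```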
For Completion3D:
- Download the test data from Google Drive or Completion3D and put it in the `data/completion3d` folder.
- Run `test_benchmark.sh` to generate the `submission.zip` file for the Completion3D benchmark.
For PCN:
- The training data come from the PCN repository. You can download the training (`train.lmdb`, `train.lmdb-lock`) and validation (`valid.lmdb`, `valid.lmdb-lock`) data from the `shapenet` directory at the training-set link provided in the PCN repository.
- Run `python create_pcn_h5.py` to generate the training and validation files in `.h5` format (a sketch of how the generated files can be loaded follows this list).
- Run `sh run.sh` for training.
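For reference, a minimal sketch of a PyTorch `Dataset` over the generated `.h5` files is shown below; the `incomplete_pcds`/`complete_pcds` key names and the file path are assumptions and should be matched to whatever `create_pcn_h5.py` actually writes.

```python
# Minimal sketch of a Dataset over the generated .h5 training files. The path and
# the 'incomplete_pcds' / 'complete_pcds' key names are assumptions -- match them
# to the keys actually written by create_pcn_h5.py.
import h5py
import torch
from torch.utils.data import Dataset

class PCNH5Dataset(Dataset):
    def __init__(self, h5_path='data/pcn/train.h5'):
        with h5py.File(h5_path, 'r') as f:
            self.partial = f['incomplete_pcds'][:]   # (N, P, 3) partial input clouds
            self.complete = f['complete_pcds'][:]    # (N, G, 3) ground-truth clouds

    def __len__(self):
        return len(self.partial)

    def __getitem__(self, idx):
        return (torch.from_numpy(self.partial[idx]).float(),
                torch.from_numpy(self.complete[idx]).float())
```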
For Completion3D:
- You can directly download the training files from the Completion3D benchmark. Run `sh run.sh` and set `dataset` to `Completion3D`.
Our code is partly based on ECG and VRCNet. We sincerely thank the authors for their contributions.