This is the official implementation of *Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction*.
This repository is based on the official implementation of Discrete Point Flow.
We provide all necessary requirements in the form of an `environment.yml`.
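To set up the environment with conda, the usual workflow applies (the environment name is whatever `environment.yml` defines; `gwtf` below is only a placeholder):

```
conda env create -f environment.yml
conda activate gwtf  # replace gwtf with the name field from environment.yml
```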
For our evaluation, we rely on the efficient implementation of the EMD metric provided by PointFlow; please follow the installation instructions provided there.
Alternatively, the precompiled code can be downloaded here; it needs to be unzipped and placed in `lib/metrics/`, and is expected to work with the provided `environment.yml`.
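For instance, assuming the downloaded archive is named `metrics.zip` (a hypothetical name; adjust it to the actual download):

```
# hypothetical archive name; extract the precompiled EMD code into lib/metrics/
unzip metrics.zip -d lib/metrics/
```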
We train our models on ShapeNet. Specifically, we use ShapeNetCore55 in our experiments on generative modeling and autoencoding, and ShapeNetAll13 in our experiments on single-view reconstruction. After downloading, the datasets can be preprocessed by running:
```
python preprocess_ShapeNetCore.py data_dir save_dir
```

for ShapeNetCore55, and

```
python preprocess_ShapeNetAll.py shapenetcore.v1_data_dir shapenetall13_data_dir save_dir
```

for ShapeNetAll13.
Subsequently, train/val/test splits for ShapeNetCore55 are created using:

```
python resample_ShapeNetCore.py data_path
```
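For example, an end-to-end preprocessing run for ShapeNetCore55 might look like this (all paths are placeholders, and it is assumed that `resample_ShapeNetCore.py` operates on the preprocessed output):

```
# placeholder paths for data_dir / save_dir / data_path from the commands above
python preprocess_ShapeNetCore.py /data/ShapeNetCore /data/shapenet55_preprocessed
python resample_ShapeNetCore.py /data/shapenet55_preprocessed
```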
Since the preprocessing takes up to a week, we provide the preprocessed datasets:
- Preprocessed ShapeNetCore55
- Preprocessed ShapeNetAll13 meshes and images.
All pretrained models, including the corresponding config files, can be downloaded here. To use the models during evaluation, set `path2data` in the configs of the pretrained models to your path to the preprocessed data.
All training configurations can be found in `configs/`. Prior to training/evaluation, remember to set `path2data` in the respective config file accordingly. Note that `path2save` specifies the logging directory and defaults to `./results`.
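As an illustration, the relevant entries of a config might look as follows (the actual file format and remaining keys depend on the files in `configs/`; only `path2data` and `path2save` are taken from the description above):

```
# excerpt only; the surrounding keys are omitted
path2data: /data/shapenet55_preprocessed  # your preprocessed data
path2save: ./results                      # logging directory (default)
```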
A generative model can be trained on airplanes/cars/chairs by running the corresponding command below:

```
bash ./scripts/train_airplane_gen.sh
bash ./scripts/train_car_gen.sh
bash ./scripts/train_chair_gen.sh
```
An autoencoder can be trained on the entire ShapeNet dataset using:

```
bash ./scripts/train_all_ae.sh
```
To train our model on single-view reconstruction, run:

```
bash ./scripts/train_all_svr.sh
```
Generative models can be evaluated by running:

```
bash ./scripts/run_evaluate_gen.sh
```

Autoencoding can be evaluated by running:

```
bash ./scripts/run_evaluate_ae.sh
```

Single-view reconstruction can be evaluated by running:

```
bash ./scripts/run_evaluate_svr.sh
```
For visualization with the Mitsuba renderer, first install and compile Mitsuba 2.0 following the official documentation. Note that Mitsuba needs to be sourced in every new shell before it can be used. Then run `evaluate_ae.py` with the flag `--save` to generate the `.h5` file consisting of ground-truth point clouds and the corresponding generated point clouds.
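For example, this step might look as follows (the `setpath.sh` location depends on your Mitsuba build directory, and any further arguments of `evaluate_ae.py` are omitted):

```
# source Mitsuba 2 in every new shell before rendering (path is illustrative)
source /path/to/mitsuba2/build/setpath.sh
# write ground-truth and generated point clouds to an .h5 file
python evaluate_ae.py --save
```

Subsequently, point clouds can be rendered by running: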
```
bash ./scripts/render.sh
```
where `path_h5` denotes the path of the `.h5` file, `path_png` the directory in which to save the PNG files, `path_mitsuba` the path where Mitsuba can be used, `name_png` the name under which the PNG files are saved, and `indices` the indices of the samples you want to render. All of these strings need to be specified.
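As a sketch, the variables might be filled in like this (all values are illustrative, and how `render.sh` consumes them depends on the script itself):

```
path_h5=./results/samples.h5    # .h5 file written by evaluate_ae.py --save
path_png=./results/renders      # output directory for the PNG files
path_mitsuba=/path/to/mitsuba2  # where the sourced Mitsuba build lives
name_png=airplane               # base name for the saved PNG files
indices="0 1 2"                 # sample indices to render
```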
```
@article{postels2021go,
  title={Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction},
  author={Postels, Janis and Liu, Mengya and Spezialetti, Riccardo and Van Gool, Luc and Tombari, Federico},
  journal={International Conference on 3D Vision},
  year={2021}
}
```