This repository provides the original implementation of our MICCAI 2022 paper "Landmark-free Statistical Shape Modeling via Neural Flow Deformations".
If you use our work, please cite:
```bibtex
@inproceedings{10.1007/978-3-031-16434-7_44,
  title={Landmark-free Statistical Shape Modeling via Neural Flow Deformations},
  author={L{\"u}dke, David and Amiranashvili, Tamaz and Ambellan, Felix and Ezhov, Ivan and Menze, Bjoern H. and Zachow, Stefan},
  booktitle={Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022},
  pages={453--463},
  year={2022},
  publisher={Springer Nature Switzerland}
}
```
This implementation requires PyTorch, PyTorch3D, and torchdiffeq as its main dependencies. Please set up a new virtual environment and install PyTorch (and PyTorch3D) as well as the additional requirements from requirements.txt:
```shell
# Install PyTorch
pip3 install torch==1.11.0+cu<cuda-version> torchvision==0.12.0+cu<cuda-version> torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu<cuda-version>
# Install PyTorch3D
pip3 install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py<python-version>_cu<cuda-version>_pyt<pytorch-version>/download.html
# Install other dependencies
pip3 install -r setup/requirements.txt
```
Note: The code has been run and tested on Linux with Python 3.8 and CUDA 11.3. Installing the matching PyTorch build for your CUDA version might require a different command. PyTorch3D in particular can be tricky to install; depending on the OS, PyTorch, and CUDA versions, help can be found here.
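For the tested configuration (Linux, Python 3.8, CUDA 11.3, and the pinned PyTorch 1.11.0), the placeholders in the commands above would resolve to, for example:

```shell
# Example only: assumes Python 3.8, CUDA 11.3, PyTorch 1.11.0;
# adjust the version tags for other setups.
pip3 install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip3 install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
```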
Shapes must be in .ply format and organised into train, val, and test directories, alongside a mean.ply template. For a reconstruction experiment, shapes can be generated with make_reconstruction_data.py. Classification labels must be provided as a dictionary with the case IDs as keys and labels as values.
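The expected layout can be sketched as follows; this is a hypothetical helper, not part of the repository, and the pickle serialization of the label dictionary is an assumption (the actual loading code may expect a different format):

```python
import os
import pickle

def check_data_dir(root):
    """Check for the train/val/test split and the mean.ply template
    described above. Returns (missing_subdirs, has_template)."""
    expected = ["train", "val", "test"]
    missing = [d for d in expected if not os.path.isdir(os.path.join(root, d))]
    has_template = os.path.isfile(os.path.join(root, "mean.ply"))
    return missing, has_template

# Classification labels: a dictionary mapping case IDs to labels,
# serialized here with pickle (the on-disk format is an assumption).
labels = {"case_001": 0, "case_002": 1}
with open("labels.pkl", "wb") as f:
    pickle.dump(labels, f)
```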
Learning hub representations of the training shapes via ShapeFlow's hub-and-spokes method is configured and run via a shell script:

```shell
sh generate_template.sh
```
The training is configured and run via a shell script:

```shell
sh train.sh
```
Fitting the model to unseen data (test, validation) and sampling shapes for the specificity experiment is configured and run via a shell script:

```shell
sh eval.sh
```
Computing the surface distances and the number of self-intersections for generality and specificity data is configured and run via a shell script:

```shell
sh compute_distances.sh
```
Fitting the model to partial and sparse data and generating reconstructions is configured and run via a shell script:

```shell
sh reconstruction.sh
```
The classification functions can be found here.
Tuning hyperparameters based on a pretrained model is configured and run via a shell script:

```shell
sh tune.sh
```
Parts of the code are adapted from other repositories: