
Deformation-Recovery Diffusion Model (DRDM):
Instance Deformation for Image Manipulation and Synthesis

Website arXiv Explore DRDM in Colab


Code for paper Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis

This repo provides a PyTorch implementation of the DRDM training and inference pipeline.


Contents


0. Brief Intro


This research proposes a novel generative diffusion model based on deformation diffusion-and-recovery, a deformation-centric counterpart of the usual intensity noising-and-denoising process. The model, named DRDM, aims to generate realistic and diverse anatomical changes. The framework applies random deformation diffusion followed by realistic deformation recovery, enabling diverse deformations to be generated for an individual image.
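To make the deformation-centric diffusion idea concrete, here is a minimal NumPy/SciPy sketch of the forward (deformation diffusion) half only: small random displacement steps are accumulated and the image is resampled through the result. The additive composition and the smoothing parameters are simplifications for illustration; the paper's actual multi-scale sampling and the learned recovery network are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)

def random_step_field(shape, sigma_smooth=8.0, scale=0.5):
    """One small random displacement step (smoothed Gaussian noise), in pixels."""
    return np.stack([
        gaussian_filter(rng.standard_normal(shape), sigma_smooth) * scale
        for _ in range(2)
    ])  # (2, H, W): per-axis displacement

def warp(img, ddf):
    """Backward warping: resample `img` at identity grid + displacement."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in img.shape], indexing="ij"))
    return map_coordinates(img, grid + ddf, order=1, mode="nearest")

img = rng.random((64, 64))
ddf = np.zeros((2, 64, 64))
for t in range(10):                        # T deformation-diffusion steps
    ddf += random_step_field(img.shape)    # simplified additive composition
deformed = warp(img, ddf)
```

The recovery half would be a network trained to predict (a portion of) `ddf` from `deformed`, undoing the deformation step by step, analogous to denoising in intensity-based diffusion models.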

The main contributions include:

  • Instance-specific deformation synthesis: This is the first study to explore diverse deformation generation for a single specific image, without requiring an atlas or any other reference image;
  • Deformation diffusion model: A novel diffusion model is proposed based on deformation diffusion and recovery, rather than intensity/score diffusion or latent-feature diffusion built on a registration framework;
  • Multi-scale random deformation velocity field sampling and integration: A method of multi-scale random Dense Velocity Field sampling and integration is proposed to randomly create deformation fields with physically plausible distributions for DRDM training;
  • Training from scratch without annotation: Training DRDM requires no human annotation and no external model/framework (registration or optical/scene flow);
  • Data augmentation for few-shot learning: The diverse deformation fields generated by DRDM are applied to both images and pixel-level segmentation labels, augmenting morphological information without changing anatomical topology, thereby providing augmented data for few-shot learning tasks;
  • Synthetic training for image registration: The synthetic deformations created by DRDM can be used to train an image registration model without any external annotation;
  • Benefiting downstream tasks: Experimental results show that data augmentation or synthesis by DRDM improves downstream segmentation and registration tasks. The DRDM-based segmentation and registration methods respectively outperform the previous augmentation method and the previous synthetic-training method, validating the plausibility and value of the deformation fields generated by DRDM.
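The multi-scale random velocity field contribution above can be sketched as follows: Gaussian velocity grids are sampled at several coarse resolutions, upsampled, summed, and then integrated into a dense displacement field. This is an illustrative assumption of the idea, not the repository's implementation; the scale set, `v_scale` weighting, and the naive Euler integration (which ignores self-composition of the field along the path) are all simplifications.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(1)

def sample_multiscale_dvf(shape=(64, 64), scales=(4, 8, 16), v_scale=0.05):
    """Sum coarse Gaussian velocity grids upsampled to full resolution."""
    dvf = np.zeros((2, *shape))
    for s in scales:
        coarse = rng.standard_normal((2, shape[0] // s, shape[1] // s)) * v_scale * s
        dvf += np.stack([zoom(c, s, order=3) for c in coarse])
    return dvf  # (2, H, W) velocity field, in pixels per step

def integrate(dvf, n_steps=8):
    """Naive Euler integration of a stationary velocity field into a DDF.
    (A full implementation would resample the field along the trajectory.)"""
    return dvf * n_steps

dvf = sample_multiscale_dvf()
ddf = integrate(dvf)
```

Summing several scales yields displacement fields with both smooth global trends (coarse grids) and local variation (finer grids), which is the kind of physically plausible diversity the training procedure needs.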

1. Installation

Clone code from the GitHub repo: https://github.com/jianqingzheng/def_diff_rec

git clone https://github.com/jianqingzheng/def_diff_rec.git
cd def_diff_rec/

Install packages:


pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
pip install numpy==1.19.5
pip install pyquaternion==0.9.9
pip install pydicom==2.4.4

Other versions of these packages may also work.
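To confirm what is actually installed before training, a small stdlib check (package names as on PyPI) can be run; this snippet is a convenience suggestion, not part of the repository:

```python
from importlib import metadata

# Print the installed version of each required package, or flag it as missing.
for pkg in ("torch", "numpy", "pyquaternion", "pydicom"):
    try:
        print(f"{pkg}=={metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: NOT INSTALLED")
```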


2. Usage

2.1. Setup

Directory layout:

[$DOWNLOAD_DIR]/def_diff_rec/
├── Config/
|   |   # configuration files (.yaml files)
|   └── config_[$data_name].yaml
├── Data/
|   ├── Src_data/[$data_name]/
|   |   |   # processed image data for DRDM training (.nii|.nii.gz files)
|   |   ├── 0001.nii.gz
|   |   └── ...
|   ├── Tgt_data/[$data_name]/
|   |   ├── Tr/
|   |   |   |   # images for deformation (.nii|.nii.gz files)
|   |   |   ├── 0001.nii.gz
|   |   |   └── ...
|   |   └── Gt/
|   |       |   # labels for deformation (.nii|.nii.gz files)
|   |       ├── 0001.nii.gz
|   |       └── ...
|   └── Aug_data/[$data_name]/
|       |   # augmented data will be exported here (.nii|.nii.gz files)
|       ├── img/
|       |   |   # deformed images (.nii|.nii.gz files)
|       |   ├── 0001.nii.gz
|       |   └── ...
|       ├── msk/
|       |   |   # deformed labels (.nii|.nii.gz files)
|       |   ├── 0001.nii.gz
|       |   └── ...
|       └── ddf/
|           |   # deformation fields (.nii|.nii.gz files)
|           ├── 0001.nii.gz
|           └── ...
├── models/
|   └── [$data_name]-[$model_name]/
|       |   # model parameter files (.pth files)
|       ├── [$epoch_id]_[$data_name]_[$model_name].pth
|       └── ...
└── ...

Configuration setting:

| Argument | Example | Description |
| --- | --- | --- |
| `--data_name` | `'cmr'`, `'lct'` | Data folder name |
| `--net_name` | `'recresacnet'` | Network name |
| `--ndims` | `2`, `3` | Image dimensionality |
| `--num_input_chn` | `1`, `3` | Number of input image channels |
| `--img_size` | `256`, `128` | Image size |
| `--timesteps` | `80` | Number of time steps for deformation |
| `--v_scale` | `4.0e-05` | Minimal std scale of the deformation velocity field |
| `--batchsize` | `64`, `4` | Batch size for training |
| `--ddf_pad_mode` | `'border'`, `'zeros'` | Padding mode for integrating the deformation field |
| `--img_pad_mode` | `'border'`, `'zeros'` | Padding mode for resampling the image |
| `--resample_mode` | `'nearest'`, `'bicubic'` | Interpolation mode for resampling the image |
| `--device` | `'cuda'`, `'cpu'` | Device to use |
| `--patients_list` | `[]`, `[1]`, `[1,2]` | Selected list of subjects for augmentation |

Configuration settings are edited in [$DOWNLOAD_DIR]/def_diff_rec/Config/*.yaml
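As an orientation aid, a hypothetical config_lct.yaml built from the arguments above might look like the following. The field names mirror the CLI arguments, but the values here are illustrative examples, not the shipped defaults:

```yaml
data_name: lct
net_name: recresacnet
ndims: 3
num_input_chn: 1
img_size: 128
timesteps: 80
v_scale: 4.0e-05
batchsize: 4
ddf_pad_mode: border
img_pad_mode: border
resample_mode: bicubic
device: cuda
patients_list: []      # empty list: use all subjects
```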

2.2. Training (~1 month)

  1. Run python DRDM_train.py --config Config/config_$data_name.yaml
  2. Check the saved model in models/$data_name_$model_name/

* Example for training (default):

  1. Run
python DRDM_train.py --config Config/config_lct.yaml
  2. Check the saved model in models/lct_recresacnet/
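Since checkpoints follow the [$epoch_id]_[$data_name]_[$model_name].pth naming scheme, the latest one can be located with a small stdlib helper. This helper is hypothetical (not part of the repo) and is demonstrated on dummy files:

```python
import re
import tempfile
from pathlib import Path

def latest_checkpoint(model_dir: Path):
    """Return the checkpoint path with the highest leading epoch id, or None."""
    pattern = re.compile(r"^(\d+)_.+\.pth$")
    best_path, best_epoch = None, -1
    for p in model_dir.glob("*.pth"):
        m = pattern.match(p.name)
        if m and int(m.group(1)) > best_epoch:
            best_path, best_epoch = p, int(m.group(1))
    return best_path

# Demonstration on dummy checkpoint files:
with tempfile.TemporaryDirectory() as d:
    model_dir = Path(d)
    for epoch in (10, 200, 90):
        (model_dir / f"{epoch}_lct_recresacnet.pth").touch()
    print(latest_checkpoint(model_dir).name)  # 200_lct_recresacnet.pth
```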

2.3. Augmentation

  1. Put the data to augment in Data/Tgt_data/$data_name/
  2. Run python DRDM_augment.py --config Config/config_$data_name.yaml
  3. Check the output data in Data/Aug_data/$data_name/

* Example for augmentation (default):

  1. Put the data to augment in Data/Tgt_data/lct/
  2. Run
python DRDM_augment.py --config Config/config_lct.yaml
  3. Check the output data in Data/Aug_data/lct/
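Since the augmented output is split across parallel img/, msk/, and ddf/ folders (per the directory layout above), a quick consistency check that every deformed image has a matching mask and deformation field can be useful. This checker is a hypothetical convenience, demonstrated on dummy files:

```python
import tempfile
from pathlib import Path

def check_triplets(aug_dir: Path):
    """Return stem names present in img/ but missing from msk/ or ddf/."""
    def stems(sub):  # '0001.nii.gz' -> '0001'
        return {p.name.split(".")[0] for p in (aug_dir / sub).glob("*.nii.gz")}
    img, msk, ddf = stems("img"), stems("msk"), stems("ddf")
    return sorted(img - (msk & ddf))

# Demonstration on dummy output files:
with tempfile.TemporaryDirectory() as d:
    aug = Path(d)
    for sub in ("img", "msk", "ddf"):
        (aug / sub).mkdir()
    for name in ("0001", "0002"):
        for sub in ("img", "msk", "ddf"):
            (aug / sub / f"{name}.nii.gz").touch()
    (aug / "img" / "0003.nii.gz").touch()   # image with no mask/field
    print(check_triplets(aug))  # ['0003']
```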

3. Demo

A demo can be found in the provided notebook.

Alternatively, it can be easily run via Explore DRDM in Colab.


4. Citing this work

Any publication that discloses findings arising from using this source code or the network model should cite:

@article{zheng2024deformation,
  title={Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis},
  author={Zheng, Jian-Qing and Mo, Yuanhan and Sun, Yang and Li, Jiahua and Wu, Fuping and Wang, Ziyang and Vincent, Tonia and Papie{\.z}, Bart{\l}omiej W},
  journal={arXiv preprint arXiv:2407.07295},
  doi = {10.48550/arXiv.2407.07295},
  url = {https://doi.org/10.48550/arXiv.2407.07295},
  keywords = {Image Synthesis, Generative Model, Data Augmentation, Segmentation, Registration},
  year={2024}
}