This is the official implementation of CPDM, accepted for presentation at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025, to be held in Tucson, Arizona, from February 28 to March 4, 2025.
Authors: Dac Thai Nguyen, Trung Thanh Nguyen, Huu Tien Nguyen, Thanh Trung Nguyen, Huy Hieu Pham, Thanh Hung Nguyen, Thao Nguyen Truong, and Phi Le Nguyen.
For questions about the paper and dataset, please email Professor Nguyen Phi Le <lenp [at] soict.hust.edu.vn>
We provide a large-scale CT/PET dataset consisting of 2,028,628 paired CT-PET images. Please refer to the CTPET_DATASET folder to view a sample of the dataset.
This repository contains the official implementation of the proposed CPDM. CPDM employs a Brownian Bridge process-based diffusion model to directly learn the translation from the CT domain to the PET domain, reducing the stochasticity typically encountered in generative models.
conda env create -f environment.yml
conda activate CPDM
The path of paired image dataset should be formatted as:
your_dataset_path/train/A # training reference
your_dataset_path/train/B # training ground truth
your_dataset_path/val/A # validating reference
your_dataset_path/val/B # validating ground truth
your_dataset_path/test/A # testing reference
your_dataset_path/test/B # testing ground truth
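The layout above can be created in one step; `your_dataset_path` is a placeholder for your actual dataset root:

```shell
# Create the paired-dataset layout expected by CPDM.
# "your_dataset_path" is a placeholder; replace it with your real dataset root.
mkdir -p your_dataset_path/train/A your_dataset_path/train/B
mkdir -p your_dataset_path/val/A   your_dataset_path/val/B
mkdir -p your_dataset_path/test/A  your_dataset_path/test/B
```

Each A folder holds the CT (reference) images and each B folder holds the corresponding PET (ground-truth) images for that split.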
Specify your checkpoint path (for saving the model) and your dataset path in train_segmentation_model.py. Run the command below to train the model.
python train_segmentation_model.py
Specify your checkpoint path, dataset path, and sampling path in test_segmentation_model.py. Run the command below to sample and save the results to your path.
python test_segmentation_model.py
Note that you can modify this code to sample from the training, validation, or test split.
Modify the configuration file based on our template in configs/Template-CPDM.yaml. Don't forget to specify your VQGAN checkpoint path, your dataset path, and the corresponding training and validation/testing sampling paths of your Segmentation Model.
Note that you need to train your VQGAN (https://github.com/CompVis/taming-transformers) and generate the Segmentation Model's sampled results before training CPDM.
Specify your shell file based on our template in configs/Template-shell.sh. Run the command below to train or test the model.
sh shell/your_shell.sh
This work was supported by Vingroup Joint Stock Company (Vingroup JSC) and in part by the Vingroup Innovation Foundation (VINIF) under Project VINIF.2021.DA00128.
Our code is implemented based on the Brownian Bridge Diffusion Model (BBDM) (https://github.com/xuekt98/BBDM).
If you find this code useful for your research, please cite the following paper:
@inproceedings{nguyen2025CPDM,
title = {CT to PET Translation: A Large-scale Dataset and Domain-Knowledge-Guided Diffusion Approach},
author = {Nguyen, Dac Thai and Nguyen, Trung Thanh and Nguyen, Huu Tien and Nguyen, Thanh Trung and Pham, Huy Hieu and Nguyen, Thanh Hung and Truong, Thao Nguyen and Nguyen, Phi Le},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2025},
}