Chengyu Wu*, Chengkai Wang*, Yaqi Wang†, Huiyu Zhou, Yatao Zhang, Qifeng Wang†, Shuai Wang†
Our paper has been early accepted by MICCAI 2024 with review scores of (5/6/6)!!! 🥳🥳
- Linux (tested on Ubuntu 16.04, 18.04, 20.04)
- Python 3.6+
- PyTorch 1.6 or higher (tested on PyTorch 1.13.1)
- CUDA 11.3 or higher (tested on CUDA 11.6 with torch-geometric 2.2.0)
conda env create -f environment.yml
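After creating the environment, an optional sanity check (not part of the repository) can confirm that the tested versions listed above are installed:

```python
# Optional environment check: prints installed versions to compare
# against the tested configuration listed above.
import torch
import torch_geometric

print("PyTorch:", torch.__version__)                    # tested: 1.13.1
print("CUDA (build):", torch.version.cuda)              # tested: 11.6
print("CUDA available:", torch.cuda.is_available())
print("torch-geometric:", torch_geometric.__version__)  # tested: 2.2.0
```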
The training and evaluation pipeline is implemented in main.py. The code for the proposed model can be found in /model.
Due to ethical review and privacy concerns related to the patients from whom the dataset was collected, the authors do not have the right to make the dataset publicly available. For now, we recommend running the code on your own multimodal dataset; the expected data types and requirements can be configured in /dataloader/Dataset.py.
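As a starting point, the sketch below shows what a custom multimodal dataset might look like. The modality names, tensor shapes, and return signature are assumptions for illustration only; the authoritative interface is defined in /dataloader/Dataset.py.

```python
# Hypothetical sketch of a custom multimodal dataset. Field names, shapes,
# and the return format are assumptions; adapt them to /dataloader/Dataset.py.
import torch
from torch.utils.data import Dataset


class MyMultimodalDataset(Dataset):
    def __init__(self, samples):
        # `samples` is a list of dicts, one per patient, e.g.
        # {"image": <array>, "clinical": <1-D feature vector>, "label": 0 or 1}
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        s = self.samples[idx]
        image = torch.as_tensor(s["image"], dtype=torch.float32)        # imaging modality
        clinical = torch.as_tensor(s["clinical"], dtype=torch.float32)  # clinical/tabular modality
        label = torch.as_tensor(s["label"], dtype=torch.long)           # diagnosis label
        return image, clinical, label
```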
The pretrained model will be released!
Our implementation is mainly based on the following codebases. We gratefully thank the authors for their wonderful work.
- CARD: Classification and Regression Diffusion Models.
- DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification.
If you find this repository helpful, please consider citing our paper:
@inproceedings{miccai24mmfusion,
title={MMFusion: Multi-modality Diffusion Model for Lymph Node Metastasis Diagnosis in Esophageal Cancer},
author={Wu, Chengyu and Wang, Chengkai and Zhou, Huiyu and Zhang, Yatao and Wang, Qifeng and Wang, Yaqi and Wang, Shuai},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={469--479},
year={2024},
organization={Springer}
}