
MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation [MICCAI'24]

(Full code and all models will be released soon!)

Hanan Gani (1), Muzammal Naseer (1), Fahad Khan (1,2), Salman Khan (1,3)

(1) Mohamed Bin Zayed University of AI, (2) Linköping University, (3) Australian National University

paper

Official code for the paper "MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation".


Contents

  1. Updates
  2. Highlights
  3. Main Contributions
  4. Installation
  5. Run MedContext
  6. Results
  7. Citation
  8. Contact
  9. Acknowledgements

Updates

  • [June 18, 2024] Our paper is accepted at MICCAI 2024 (acceptance rate < 31%).
  • [Feb 22, 2024] Code for UNETR is released.

Highlights

Abstract: Volumetric medical segmentation is a critical component of 3D medical image analysis that delineates different semantic regions. Deep neural networks have significantly improved volumetric medical segmentation, but they generally require large-scale annotated data to achieve better performance, which can be expensive and prohibitive to obtain. To address this limitation, existing works typically perform transfer learning or design dedicated pretraining-finetuning stages to learn representative features. However, the mismatch between the source and target domain can make it challenging to learn optimal representations for volumetric data, while the multi-stage training demands higher compute as well as careful selection of stage-specific design choices. In contrast, we propose a universal training framework called MedContext that is architecture-agnostic and can be incorporated into any existing training framework for 3D medical segmentation. Our approach effectively learns self-supervised contextual cues jointly with the supervised voxel segmentation task without requiring large-scale annotated volumetric medical data or dedicated pretraining-finetuning stages. The proposed approach induces contextual knowledge in the network by learning to reconstruct the missing organ or parts of an organ in the output segmentation space. The effectiveness of MedContext is validated across multiple 3D medical datasets and four state-of-the-art model architectures. Our approach demonstrates consistent gains in segmentation performance across datasets and different architectures, even in few-shot data scenarios.
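As a rough illustration of the joint objective described above, the sketch below combines a standard supervised segmentation loss with a self-supervised term that encourages the network to recover, in the output segmentation space, regions that were masked out of the input volume. This is a minimal PyTorch sketch under our own assumptions (simple random cuboid masking, an L2 consistency term against the detached full-volume prediction, and a generic seg_criterion such as MONAI's DiceCELoss); it is not the authors' exact implementation.

import torch
import torch.nn.functional as F

def random_volume_mask(x, patch=16, mask_ratio=0.4):
    # Zero out a random subset of non-overlapping 3D patches.
    # Assumes the spatial dims are divisible by `patch`; the paper's
    # masking strategy may differ from this simple cuboid scheme.
    b, c, d, h, w = x.shape
    grid = (d // patch, h // patch, w // patch)
    keep = (torch.rand(b, *grid, device=x.device) > mask_ratio).float()
    mask = keep.repeat_interleave(patch, 1) \
               .repeat_interleave(patch, 2) \
               .repeat_interleave(patch, 3)
    return x * mask.unsqueeze(1), mask

def joint_loss(model, image, label, seg_criterion, lam=1.0):
    # Supervised segmentation on the full volume ...
    logits_full = model(image)
    sup = seg_criterion(logits_full, label)
    # ... plus a self-supervised term: predictions for the masked volume
    # should reconstruct the (detached) full-volume predictions,
    # i.e. the missing context is recovered in the output segmentation space.
    masked_image, _ = random_volume_mask(image)
    logits_masked = model(masked_image)
    ssl = F.mse_loss(torch.softmax(logits_masked, 1),
                     torch.softmax(logits_full, 1).detach())
    return sup + lam * ssl

In practice seg_criterion could be, for example, MONAI's DiceCELoss(to_onehot_y=True, softmax=True); lam balances the two terms.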


Main Contributions

  • We propose a universal training framework called MedContext that is architecture-agnostic and can be incorporated into any existing training framework for 3D medical segmentation.
  • Our approach effectively learns self-supervised contextual cues jointly with the supervised voxel segmentation task without requiring large-scale annotated volumetric medical data or dedicated pretraining-finetuning stages. The proposed approach induces contextual knowledge in the network by learning to reconstruct the missing organ or parts of an organ in the output segmentation space.
  • We validate the effectiveness of our approach across multiple 3D medical datasets and state-of-the-art model architectures. Our approach complements existing methods and improves segmentation performance in conventional as well as few-shot data scenarios.

Methodology

Installation

# Create conda environment from yaml file
conda env create --name medcontext --file=environment.yml

# Activate the environment
conda activate medcontext

Datasets

BTCV

The BTCV data is from the BTCV challenge dataset.

The dataset contains 13 abdominal organs: 1. Spleen, 2. Right Kidney, 3. Left Kidney, 4. Gallbladder, 5. Esophagus, 6. Liver, 7. Stomach, 8. Aorta, 9. IVC, 10. Portal and Splenic Veins, 11. Pancreas, 12. Right adrenal gland, 13. Left adrenal gland.

In this paper, we utilize 8 organs. Refer to our paper for further details.

Task: Segmentation

Modality: CT

Size: 30 3D volumes (18 Training + 12 Testing)

We provide the JSON file used to train our models in the ./datasets folder of this repository.
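For reference, the UNETR/BTCV pipeline in MONAI typically consumes decathlon-style datalist JSONs with "training" and "validation" entries that pair an "image" path with a "label" path. Whether dataset_18_12.json follows exactly this schema is an assumption on our part; the sketch below (with made-up file names) only shows the common pattern.

from monai.data import load_decathlon_datalist

# Assumed decathlon-style layout of dataset_18_12.json:
# {
#   "training":   [{"image": "imagesTr/img0001.nii.gz", "label": "labelsTr/label0001.nii.gz"}, ...],
#   "validation": [{"image": "imagesTs/img0061.nii.gz", "label": "labelsTs/label0061.nii.gz"}, ...]
# }
train_files = load_decathlon_datalist(
    "dataset_18_12.json", is_segmentation=True,
    data_list_key="training", base_dir="./dataset")
val_files = load_decathlon_datalist(
    "dataset_18_12.json", is_segmentation=True,
    data_list_key="validation", base_dir="./dataset")
print(len(train_files), len(val_files))  # expected 18 training and 12 testing cases for BTCV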

Please refer to "Setting up the datasets" in the nnFormer repository for more details. Alternatively, you can download the preprocessed dataset for Synapse here.

The dataset folders for Synapse should be organized as follows:

./UNETR/BTCV/dataset/
  ├── unetr_pp_raw/
      ├── unetr_pp_raw_data/
          ├── Task02_Synapse/
              ├── imagesTr/
              ├── imagesTs/
              ├── labelsTr/
              ├── labelsTs/
              ├── dataset.json
          ├── Task002_Synapse
      ├── unetr_pp_cropped_data/
          ├── Task002_Synapse

Run MedContext

Train UNETR on BTCV:

cd UNETR/BTCV
python main.py --json_list dataset_18_12.json --val_every 100 --batch_size=1 --feature_size=32 --rank 0 --logdir=PATH/TO/OUTPUT/FOLDER --optim_lr=1e-4 --lrschedule=warmup_cosine --infer_overlap=0.5 --save_checkpoint --data_dir=./dataset

Training support for other models and datasets will be released soon.

Test UNETR on BTCV:

python test_8.py --infer_overlap=0.5 --json_list dataset_18_12.json --feature_size 32 --data_dir=./dataset --pretrained_model_name student_4000.pt --pretrained_dir='PATH/TO/SAVED/CHECKPOINT' --saved_checkpoint=ckpt

Change the --pretrained_model_name according to your saved checkpoint.
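The --infer_overlap flag controls the overlap between neighbouring patches during sliding-window inference. As a rough illustration only (not the repository's test_8.py, and with an assumed 96³ ROI, an assumed 9 output classes, and a dummy input volume), MONAI's sliding_window_inference can be used like this:

import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNETR

# Placeholder model configuration; the repo's actual hyper-parameters
# (input ROI, number of classes, etc.) may differ. feature_size=32
# mirrors the --feature_size flag above.
model = UNETR(in_channels=1, out_channels=9, img_size=(96, 96, 96), feature_size=32)
model.eval()

volume = torch.rand(1, 1, 128, 128, 128)  # dummy CT volume (B, C, D, H, W)
with torch.no_grad():
    logits = sliding_window_inference(
        inputs=volume, roi_size=(96, 96, 96), sw_batch_size=4,
        predictor=model, overlap=0.5)  # overlap corresponds to --infer_overlap
pred = torch.argmax(logits, dim=1)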

Train and Test nnFormer on BTCV

cd nnFormer
DATASET_PATH=./UNETR/BTCV/dataset/

export PYTHONPATH=./
export RESULTS_FOLDER=PATH/TO/SAVE/CHECKPOINTS/
export nnFormer_preprocessed="$DATASET_PATH"/unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse
export nnFormer_raw_data_base="$DATASET_PATH"/unetr_pp_raw
python nnformer/run/run_training.py 3d_fullres nnFormerTrainerV2_nnformer_synapse 2 0 -c
python nnformer/run/run_training.py 3d_fullres nnFormerTrainerV2_nnformer_synapse 2 0 -val --valbest --val_folder VAL_BEST

The --valbest argument ensures that the best checkpoint is used for inference. If --valbest is not provided, the final checkpoint is used instead.

Contact

Should you have any questions, please contact hanan.ghani@mbzuai.ac.ae.

Citation

If you use our work, please consider citing:

@inproceedings{gani2024medcontext,
  title={MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation},
  author={Gani, Hanan and Naseer, Muzammal and Khan, Fahad and Khan, Salman},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={229--239},
  year={2024},
  organization={Springer}
}

Acknowledgements

Our code is built on top of MONAI. We thank the authors for their open-source implementation and instructions.
