[BVM 2025] A Unified Framework for Foreground and Anonymization Area Segmentation in CT and MRI Data
Welcome to the repository for the paper "A Unified Framework for Foreground and Anonymization Area Segmentation in CT and MRI Data"!
Read the paper: https://arxiv.org/abs/2501.04361
Authors:
Michal Nohel, Constantin Ulrich, Jonathan Suprijadi, Tassilo Wald and Klaus H. Maier-Hein
Author Affiliations:
Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg
Faculty of Mathematics and Computer Science, Heidelberg University
Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
Department of the Deputy Director for Science, Research and Education, University Hospital Ostrava, Czech Republic
This repository contains the code and pre-trained models accompanying the paper submitted to BVM 2025, titled "A Unified Framework for Foreground and Anonymization Area Segmentation in CT and MRI Data". The toolkit addresses key challenges in self-supervised learning (SSL) for 3D medical imaging, focusing on data privacy and computational efficiency. Our models are built on the nnU-Net framework, leveraging its adaptability and state-of-the-art performance in medical image segmentation.
- Anatomical Foreground Segmentation Network: Network for identifying relevant regions in CT and MRI scans.
- Deface Area Segmentation Network: Network for identifying anonymized areas in CT and MRI scans.
Follow the official nnU-Net installation instructions.
nnU-Net needs to know where you intend to save raw data, preprocessed data, and trained models. For this you need to set a few environment variables; please follow the instructions here.
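As a quick reference, the three environment variables nnU-Net v2 expects are `nnUNet_raw`, `nnUNet_preprocessed`, and `nnUNet_results`. The paths below are placeholders; point them at directories of your choice:

```shell
# Standard nnU-Net v2 environment variables.
# The paths are placeholders -- adjust them to your own storage layout.
export nnUNet_raw="$HOME/nnUNet_raw"
export nnUNet_preprocessed="$HOME/nnUNet_preprocessed"
export nnUNet_results="$HOME/nnUNet_results"

# Create the directories if they do not exist yet.
mkdir -p "$nnUNet_raw" "$nnUNet_preprocessed" "$nnUNet_results"
```

Add these lines to your `.bashrc` (or equivalent) so they persist across shell sessions.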
The pre-trained models used in this study are publicly available on Zenodo as the ZIP archive BVM2025_Networks; unzip it to obtain the trained models. These include both the Anatomical Foreground Segmentation Network and the Deface Area Segmentation Network, trained on CT and MRI datasets using the nnU-Net framework.
For inference, you can use the default nnU-Net inference functionality.
For Anatomical Foreground Segmentation (Dataset803_anatomical_foreground_v2), you can run the model using the command below. This executes the single model trained on the entire dataset:
nnUNetv2_predict_from_modelfolder -i INPUT_FOLDER -o OUTPUT_FOLDER -m MODEL_FOLDER -f all
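Note that nnU-Net expects each input image in `INPUT_FOLDER` to carry a four-digit channel suffix (e.g. `case01_0000.nii.gz` for single-modality data). As a sketch (the folder and file names below are illustrative only), plain `.nii.gz` files can be renamed accordingly:

```shell
# Demo folder with a stand-in for a real scan (illustrative names only).
INPUT_FOLDER=./nnunet_input_demo
mkdir -p "$INPUT_FOLDER"
touch "$INPUT_FOLDER/case01.nii.gz"

# Append the _0000 channel suffix nnU-Net expects, skipping files
# that are already suffixed so the loop is safe to re-run.
for f in "$INPUT_FOLDER"/*.nii.gz; do
  [ -e "$f" ] || continue
  case "$f" in
    *_0000.nii.gz) ;;                               # already suffixed
    *) mv "$f" "${f%.nii.gz}_0000.nii.gz" ;;
  esac
done
```

Multi-modality datasets use additional suffixes (`_0001`, `_0002`, …), one per input channel.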
The Deface Area Segmentation network (Dataset804_SEG_defaced_areas_all_v2) was trained using five-fold cross-validation. To perform inference using a five-fold ensemble, use the following command:
nnUNetv2_predict_from_modelfolder -i INPUT_FOLDER -o OUTPUT_FOLDER -m MODEL_FOLDER
Creative Commons Attribution Non Commercial 4.0 International
This license was chosen based on the licenses of the datasets used to train the networks:
- the Left Atrium Segmentation Challenge (LASC) dataset (https://ieeexplore.ieee.org/document/7029623)
- the Automatic Cardiac Diagnosis Challenge (ACDC) dataset (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8360453)
- the AMOS (A Large-Scale Abdominal Multi-Organ Benchmark) dataset (https://arxiv.org/abs/2206.08023)
- the TotalSegmentator dataset (https://pubs.rsna.org/doi/10.1148/ryai.230024)
- the HaN-Seg dataset (https://www.sciencedirect.com/science/article/pii/S0167814024006807?via%3Dihub)
- the OASIS3 dataset (https://www.medrxiv.org/content/10.1101/2019.12.13.19014902v1)
- OpenNeuro datasets (https://openneuro.org/datasets/ds000113/versions/1.3.0, https://openneuro.org/datasets/ds000233/versions/1.0.1, https://openneuro.org/datasets/ds004169/versions/1.0.7, https://openneuro.org/datasets/ds004192/versions/1.0.7 )
If you use this code in your research, please cite our paper: Nohel, Michal, et al. "A Unified Framework for Foreground and Anonymization Area Segmentation in CT and MRI Data." arXiv preprint arXiv:2501.04361 (2025).
@misc{nohel2025unifiedframeworkforegroundanonymization,
title={A Unified Framework for Foreground and Anonymization Area Segmentation in CT and MRI Data},
author={Michal Nohel and Constantin Ulrich and Jonathan Suprijadi and Tassilo Wald and Klaus Maier-Hein},
year={2025},
eprint={2501.04361},
archivePrefix={arXiv},
primaryClass={eess.IV},
url={https://arxiv.org/abs/2501.04361},
}