In our recent work, we propose S3-Net, a novel self-supervised algorithm for accurate medical image segmentation. S3-Net incorporates Inception Large Kernel Attention (I-LKA) modules to enhance the network's ability to capture both contextual information and local intricacies, leading to precise semantic segmentation. To handle the deformations commonly observed in medical images, the architecture integrates deformable convolutions, allowing the network to capture and delineate lesion deformations and produce better-defined object boundaries. A key aspect of the proposed method is its emphasis on learning invariance to affine transformations, which are frequently encountered in medical imaging; this focus on robustness to geometric distortions helps the model produce consistent predictions under such transformations. Moreover, to ensure spatial consistency and encourage the grouping of neighboring image pixels with similar features, we introduce a spatial consistency loss term.
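As a rough illustration of the training signal described above, the sketch below shows how an affine-consistency term and a spatial consistency term might be combined. This is a minimal PyTorch example, not the official S3-Net implementation: the generic segmentation network `net`, the helper `random_affine_grid`, and the weight `lambda_spatial` are all illustrative assumptions.

```python
# Minimal sketch of a "consistency over transformation" objective.
# Assumes a generic segmentation network `net` mapping an image
# (B, C, H, W) to per-pixel class logits (B, K, H, W). Names and
# weights are illustrative, not the official S3-Net code.
import torch
import torch.nn.functional as F


def random_affine_grid(batch, height, width, device):
    """Build a random affine sampling grid (small rotation + translation)."""
    angle = (torch.rand(batch, device=device) - 0.5) * 0.5   # radians
    tx = (torch.rand(batch, device=device) - 0.5) * 0.2      # x translation
    ty = (torch.rand(batch, device=device) - 0.5) * 0.2      # y translation
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack(
        [torch.stack([cos, -sin, tx], dim=1),
         torch.stack([sin,  cos, ty], dim=1)], dim=1)         # (B, 2, 3)
    return F.affine_grid(theta, [batch, 1, height, width], align_corners=False)


def consistency_over_transformation_loss(net, images, lambda_spatial=0.1):
    """Affine-consistency term plus a simplified spatial smoothness term."""
    b, _, h, w = images.shape
    grid = random_affine_grid(b, h, w, images.device)

    logits = net(images)                                      # (B, K, H, W)
    logits_of_warped = net(F.grid_sample(images, grid, align_corners=False))
    warped_logits = F.grid_sample(logits, grid, align_corners=False)

    # Predictions for the transformed image should match the
    # transformed predictions of the original image: net(T(x)) ~ T(net(x)).
    p_warped_input = F.softmax(logits_of_warped, dim=1)
    p_warped_output = F.softmax(warped_logits, dim=1)
    consistency = F.mse_loss(p_warped_input, p_warped_output)

    # Simplified stand-in for the spatial consistency term: neighboring
    # pixels are encouraged to receive similar class probabilities.
    p = F.softmax(logits, dim=1)
    spatial = (p[:, :, 1:, :] - p[:, :, :-1, :]).abs().mean() + \
              (p[:, :, :, 1:] - p[:, :, :, :-1]).abs().mean()

    return consistency + lambda_spatial * spatial
```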
- July 25, 2023: Paper accepted at ICCV CVAMD 2023
- If you find this paper useful, please also consider checking out our paper accepted at MIDL 2023 [Paper] [GitHub]
pip install -r requirements.txt
Put your input images in the input_images/image folder and run the S3Net.ipynb notebook.
If this code helps with your research, please consider citing the following paper:
@inproceedings{karimijafarbigloo2023self,
title={Self-supervised Semantic Segmentation: Consistency over Transformation},
author={Karimijafarbigloo, Sanaz and Azad, Reza and Kazerouni, Amirhossein and Velichko, Yury and Bagci, Ulas and Merhof, Dorit},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2654--2663},
year={2023}
}