This repository contains the PyTorch implementation of:
Masked Frequency Consistency for Domain-Adaptive Semantic Segmentation of Laparoscopic Images, MICCAI 2023. [URL] [PDF]
For this project, we used Python 3.7.9. We recommend setting up a new virtual environment:
conda create -n mfc python==3.7.9
conda activate mfc
In that environment, the requirements can be installed with:
pip install -r requirements.txt
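To verify the environment before proceeding, a quick check like the following can be used (a minimal sketch, assuming torch and mmcv are among the pinned requirements):

# Minimal environment check (hypothetical helper, not part of this repository).
import torch
import mmcv

print(f"torch {torch.__version__} (CUDA available: {torch.cuda.is_available()})")
print(f"mmcv  {mmcv.__version__}")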
As in DAFormer, please download the MiT weights (mit_b5.pth) pretrained on ImageNet-1K from the official SegFormer repository and put them in a folder pretrained/ within this project. Only mit_b5.pth is necessary.
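To confirm the checkpoint was downloaded intact, a quick load test can be run (a sketch; the file name and location follow the instructions above):

# Hypothetical sanity check for the downloaded checkpoint;
# not part of this repository.
import torch

state = torch.load("pretrained/mit_b5.pth", map_location="cpu")
# Some checkpoints wrap the weights under a "state_dict" key.
if "state_dict" in state:
    state = state["state_dict"]
print(f"Loaded {len(state)} parameter tensors.")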
Brief instructions on how to set up the datasets are provided below; more detailed instructions will be provided later.
Simulated Dataset: Dataset_sim.md
I2I Dataset: Dataset_i2i.md
Cholec Datasets: Dataset_cholec.md
The final folder structure should look like this:
MFC
├── ...
├── MFC_DP
├── DATA
│ ├── cholec
│ │ ├── img
│ │ │ ├── train
│ │ │ ├── test
│ │ ├── gt
│ │ │ ├── test
│ ├── simulated
│ │ ├── images
│ │ ├── labels
│ ├── i2i
│ │ ├── images
│ │ ├── labels
├── ...
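Before preprocessing, the layout can be verified with a small script; the sketch below (a hypothetical helper, not part of the repository) checks the directories from the tree above when run from the MFC project root:

# Hypothetical layout check; paths follow the tree above.
from pathlib import Path

EXPECTED = [
    "DATA/cholec/img/train",
    "DATA/cholec/img/test",
    "DATA/cholec/gt/test",
    "DATA/simulated/images",
    "DATA/simulated/labels",
    "DATA/i2i/images",
    "DATA/i2i/labels",
]

for rel in EXPECTED:
    status = "ok" if Path(rel).is_dir() else "MISSING"
    print(f"{status:7s} {rel}")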
Data Preprocessing: Finally, please run the following scripts to convert the label IDs to train IDs and to generate the class index for RCS (Rare Class Sampling):
cd MFC_DP
python tools/convert_datasets/cs8k.py ../DATA/cholec
python tools/convert_datasets/i2i.py ../DATA/i2i
python tools/convert_datasets/i2i.py ../DATA/simulated
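These conversion scripts remap dataset-specific label IDs to contiguous train IDs. The sketch below illustrates the general remapping pattern with a made-up mapping; the actual mappings live in the scripts above:

# Illustrative remapping pattern only; the real label-ID -> train-ID
# mappings live in the conversion scripts above.
import numpy as np
from PIL import Image

ID_TO_TRAINID = {0: 255, 1: 0, 2: 1}  # hypothetical mapping; 255 = ignore

def convert_label(label_path, out_path):
    label = np.array(Image.open(label_path))
    out = np.full_like(label, 255)  # default to the ignore index
    for label_id, train_id in ID_TO_TRAINID.items():
        out[label == label_id] = train_id
    Image.fromarray(out).save(out_path)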
A training job can be launched using:
python run_experiments.py --config configs/mfc_seg/xxx.py
The logs and checkpoints are stored in work_dirs/.
A trained model can be evaluated using:
sh test.sh work_dirs/local-segmentation/run_name
The predictions are saved for inspection to work_dirs/run_name/preds, and the mIoU of the model is printed to the console.
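For reference, mIoU is the mean over classes of intersection-over-union, which can be accumulated via a confusion matrix. The sketch below is illustrative only; the repository reports mIoU through mmsegmentation's evaluation pipeline, not this function:

# Illustrative mIoU computation over arrays of predicted and
# ground-truth label maps (values assumed to be valid train IDs).
import numpy as np

def mean_iou(preds, gts, num_classes, ignore_index=255):
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        mask = gt != ignore_index
        # conf[i, j] counts pixels with ground truth i predicted as j.
        conf += np.bincount(
            num_classes * gt[mask].astype(int) + pred[mask].astype(int),
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)  # classes absent everywhere score 0
    return float(iou.mean())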
This project is based on mmsegmentation version 0.16.0. For more information about the framework structure and the config system, please refer to the mmsegmentation documentation and the mmcv documentation.
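Configs can also be inspected programmatically through mmcv's Config API (mmcv 1.x), for example (reusing the placeholder config path from above):

# Inspect a training config through mmcv's Config API.
from mmcv import Config

cfg = Config.fromfile("configs/mfc_seg/xxx.py")  # xxx.py is a placeholder
print(cfg.pretty_text)  # fully resolved config, including inherited bases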
MFC is based on open-source projects including mmsegmentation, SegFormer, and DAFormer. We thank their authors for making the source code publicly available.
If you have any questions, please contact Xinkai Zhao.