Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation
TensorFlow 2 implementation of the SIFA unsupervised cross-modality domain adaptation framework.
Please refer to the branch SIFA-v1 for the original paper and code:

*Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation*, IEEE Transactions on Medical Imaging.
```shell
!git clone https://github.com/2Falx/SIFA-Tensorflow2.git
!mv {your_data_folder} SIFA-Tensorflow2
%cd SIFA-Tensorflow2/
```
- Open and follow the "SIFA_Implementation_Tf2-.ipynb" Jupyter notebook, modifying the indicated variables and parameters to adapt it to your dataset.
- Set the home_path in Cell #3 (here it corresponds to the folder containing the above-mentioned notebook).
- Store your data inside a folder "data" in the home_path.
- Store SWI images in /data/SWI and TOF images in /data/TOF (you can find an example folder for a single 3D image in Cell #6).
- Select the desired Spacing in Cell #13 to allow image resizing (here the SWI Spacing is chosen).
- Cell #23 contains all the useful statistics for OUR dataset: comment out the last line to use the statistics extracted from your own data.
- Slices are preprocessed in Cell #27 and transformed into TFRecords in Cell #30 (here you can modify the "tfrecords_folder" name).
- Modify the "percentage" parameter of the "split_file" function (Cell #31) to select the desired train/validation split percentage.
- You can use (or modify) the function in Cell #34 to remove previous SIFA training outputs.
- Cell #37 contains a brief description of all the parameters you can set before training.
- Start training by running Cell #38.
- The "Print losses" section contains some cells that create a "tot_losses.csv" file with the loss trend for each model loss, given the name of an output folder ("date" variable, Cell #41). You can find your file inside the "{date}/losses" path.
- You can also uncomment the last lines of Cell #42 to remove all the saved models of the selected output and save some disk space.
- The last section contains some cells that plot all the loss curves given the complete path of a .csv file ("file_name" variable, Cell #46). You can find the list of all the .csv files in your home path by running Cell #43 or #45.
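The train/validation split can be sketched as follows. This is a hypothetical stand-in for the notebook's `split_file` cell (the actual implementation may differ); the function and file names are illustrative only.

```python
import random

def split_files(file_list, percentage=0.8, seed=42):
    """Split a list of TFRecord file paths into train/validation subsets.

    `percentage` is the fraction of files assigned to the training set.
    Illustrative stand-in for the notebook's `split_file` function.
    """
    files = sorted(file_list)
    random.Random(seed).shuffle(files)  # deterministic shuffle for reproducibility
    n_train = int(len(files) * percentage)
    return files[:n_train], files[n_train:]

# Example with dummy file names
records = [f"slice_{i:03d}.tfrecords" for i in range(10)]
train, val = split_files(records, percentage=0.7)
print(len(train), len(val))  # 7 3
```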
- Spacing Extraction
- Slicing and Reshaping
- Standardization using mean and std over the volume of the slice
- Normalization using min and max over the whole dataset
- Slice Preprocessing:
  - Center Crop
  - Zero Padding
  - Add 1 dim / One-hot Encoding
- Transformation into TFRecord
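The slice-preprocessing steps above can be sketched in NumPy as follows. This is an illustrative sketch under stated assumptions, not the notebook's exact code: the crop size, number of classes, and function name are assumptions.

```python
import numpy as np

def preprocess_slice(volume, slice_idx, ds_min, ds_max,
                     crop=256, n_classes=2, labels=None):
    """Standardize, normalize, center-crop/zero-pad one slice of a 3D volume."""
    # Standardization: mean/std computed over the whole volume of the slice
    vol = (volume - volume.mean()) / (volume.std() + 1e-8)
    # Normalization: min/max assumed to be computed over the whole dataset
    vol = (vol - ds_min) / (ds_max - ds_min)
    img = vol[slice_idx]

    # Center crop (if larger than `crop`), then zero-pad (if smaller)
    h, w = img.shape
    top, left = max((h - crop) // 2, 0), max((w - crop) // 2, 0)
    img = img[top:top + crop, left:left + crop]
    ph, pw = crop - img.shape[0], crop - img.shape[1]
    img = np.pad(img, ((ph // 2, ph - ph // 2), (pw // 2, pw - pw // 2)))

    img = img[..., np.newaxis]            # add 1 (channel) dimension
    if labels is not None:                # one-hot encode the slice's mask
        mask = np.eye(n_classes)[labels[slice_idx].astype(int)]
        return img, mask
    return img

vol = np.random.rand(4, 300, 200)
img = preprocess_slice(vol, 0, ds_min=-3.0, ds_max=3.0, crop=256)
print(img.shape)  # (256, 256, 1)
```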
- You can find all the output results in the folder "{home_path}/SIFA/output/{date}"
  - imgs: contains all the .jpg images that you can also find in the .html files
  - nib_imgs: same as imgs, but contains the .nii.gz volumes
  - losses: contains the .txt files of all the losses, both individually in each file and all together in tot_losses.csv
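The aggregation into tot_losses.csv can be sketched as follows. This is a hypothetical sketch assuming each `<loss_name>.txt` holds one float per training step; the notebook's cells may read a different layout, and `merge_losses` is an illustrative name.

```python
import csv
import glob
import os

def merge_losses(losses_dir, out_name="tot_losses.csv"):
    """Merge the per-loss .txt files of one output folder into a single CSV.

    Assumes each <loss_name>.txt contains one float value per line.
    """
    columns = {}
    for path in sorted(glob.glob(os.path.join(losses_dir, "*.txt"))):
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            columns[name] = [float(line) for line in f if line.strip()]

    n_rows = max(map(len, columns.values()), default=0)
    out_path = os.path.join(losses_dir, out_name)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns.keys())        # one column per loss name
        for i in range(n_rows):                # pad shorter columns with ""
            writer.writerow([col[i] if i < len(col) else ""
                             for col in columns.values()])
    return out_path
```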
html visualization files:

- 1st Row: 6 Data images
  - $X_s$: Input from the Source Domain
  - $X_t$: Input from the Target Domain
  - $X_{s \rightarrow t}$: Transformed Image from Source to Target
  - $X_{t \rightarrow s}$: Transformed Image from Target to Source
  - $\tilde{X}_s$: Reconstructed Source Image ($X_{s \rightarrow t \rightarrow s}$)
  - $\tilde{X}_t$: Reconstructed Target Image ($X_{t \rightarrow s \rightarrow t}$)
- 2nd Row: 7 Segmentation masks
  - 'pred_mask_a': Predicted Mask from $X_t$ ($Y_t$)
  - 'pred_mask_b': Same as 'pred_mask_a' (1)
  - 'pred_mask_b_ll': Predicted Mask from $X_{t \rightarrow s}$ ($Y_{t \rightarrow s}$)
  - 'pred_mask_fake_a': Predicted Mask from $X_s$ ($Y_s$)
  - 'pred_mask_fake_b': Same as 'pred_mask_fake_a' (1)
  - 'pred_mask_fake_b_ll': Predicted Mask from $X_{s \rightarrow t}$ ($Y_{s \rightarrow t}$)
  - 'gt': Ground truth Mask of $X_s$ ($Y_s$)
This code is a revised version of the original SIFA implementation. Part of the code is adapted from the TensorFlow implementation of CycleGAN.
(1) Check the values returned by get_outputs() in model.py for further details.