Commit

Merge branch 'master' of github.com:jmlipman/MedicDeepLabv3Plus
jmlipman committed Apr 15, 2021
2 parents 6c84796 + f2e6528 commit 27b5dcc
Showing 2 changed files with 27 additions and 25 deletions.
40 changes: 21 additions & 19 deletions README.md
@@ -1,6 +1,10 @@
MedicDeepLabv3+
======================

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4009212.svg)](https://doi.org/10.5281/zenodo.4009212) (Software)

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4009246.svg)](https://doi.org/10.5281/zenodo.4009246) (Trained models)

Repository of MedicDeepLabv3+

![Architecture](medicdeeplabv3plus.png "MedicDeepLabv3+ Architecture")
@@ -30,23 +34,19 @@ This implementation of MedicDeepLabv3+ allows combining several models trained s
.
├── eval.py # Generates segmentation masks. It requires a file with the trained parameters of MedicDeepLabv3+ (provided by train.py)
├── train.py # Optimizes MedicDeepLabv3+ and saves its optimized parameters (required by eval.py)
├── lib
│ ├── losses.py # Cross Entropy + Dice Loss functions
│ ├── metric.py # Metrics to quantify segmentation quality, e.g. Dice coeff., Hausdorff distance, Compactness
│ ├── utils.py # Other functions.
│ ├── blocks
│ │ ├── BasicBlocks.py # Contains basic operations of the ConvNet
│ │ └── MedicDeepLabv3PlusBlocks.py # Blocks of operations for MedicDeepLabv3+
│ ├── data
│ │ ├── BaseDataset.py # Basic dataset operations
│ │ └── DataWrapper.py # Reads and parses the NIfTI files
│ └── models
│ ├── BaseModel.py # Contains main training and evaluation procedures
│ └── MedicDeepLabv3Plus.py # PyTorch definition of our model
└── trained_models # Trained MedicDeepLabv3+ parameters
├── MedicDeepLabv3Plus-model-300_1
├── MedicDeepLabv3Plus-model-300_2
└── MedicDeepLabv3Plus-model-300_3
└── lib
├── losses.py # Cross Entropy + Dice Loss functions
├── metric.py # Metrics to quantify segmentation quality, e.g. Dice coeff., Hausdorff distance, Compactness
├── utils.py # Other functions.
├── blocks
│ ├── BasicBlocks.py # Contains basic operations of the ConvNet
│ └── MedicDeepLabv3PlusBlocks.py # Blocks of operations for MedicDeepLabv3+
├── data
│ ├── BaseDataset.py # Basic dataset operations
│ └── DataWrapper.py # Reads and parses the NIfTI files
└── models
├── BaseModel.py # Contains main training and evaluation procedures
└── MedicDeepLabv3Plus.py # PyTorch definition of our model
```

### 2. Installation and Requirements
@@ -63,7 +63,7 @@ This implementation of MedicDeepLabv3+ allows combining several models trained s

1. Install dependencies with pip
```shell
pip install scipy, scikit-image, nibabel
pip install scipy scikit-image nibabel
```

2. Download source code
@@ -154,6 +154,8 @@ Optionally, if the ground truth is provided in the same folder as the data (as d

Finally, you can choose which GPU to use when executing eval.py, just as with train.py. Since eval.py, unlike train.py, does not take long to run, evaluating MedicDeepLabv3+ on the CPU with --gpu -1 is a viable option.
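
For instance, a minimal sketch of a CPU-only run; the directory and model paths are placeholders, as in the minimum setup shown below:

```shell
# Sketch: run the evaluation on the CPU instead of a GPU (placeholder paths)
python eval.py --input DIR --output DIR --model FILE --gpu -1
```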

Note: we provide three trained MedicDeepLabv3+ models at DOI:10.5281/zenodo.4009246

```shell
# Minimum evaluation setup
python eval.py --input DIR --output DIR --model FILE(S)
@@ -182,4 +184,4 @@ Files generated by eval.py:
[MIT License](LICENSE)

### 5. Contact
Feel free to write an email with questions or feedback about MedicDeepLabv3+ at **juanmiguel.valverde@uef.com**
Feel free to write an email with questions or feedback about MedicDeepLabv3+ at **juanmiguel.valverde@uef.fi**
12 changes: 6 additions & 6 deletions lib/models/BaseModel.py
@@ -176,13 +176,13 @@ def evaluate(self, test_loader, metrics, remove_islands, save_output=True):
if remove_islands:
y_pred_cpu = removeSmallIslands(y_pred_cpu, thr=20)

# If GT was provided
# Predictions (and GT) separate the two hemispheres
# combineLabels will combine these such that it creates
# brainmask and contra-hemisphere ROIs instead of
# two different hemisphere ROIs.
y_pred_cpu = combineLabels(y_pred_cpu)
# If GT was provided, measure the performance
if len(y_true_cpu.shape) > 1:
# Predictions (and GT) separate the two hemispheres
# combineLabels will combine these such that it creates
# brainmask and contra-hemisphere ROIs instead of
# two different hemisphere ROIs.
y_pred_cpu = combineLabels(y_pred_cpu)
y_true_cpu = combineLabels(y_true_cpu)

results[id_] = Measure.all(y_pred_cpu, y_true_cpu)
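
For context, a minimal sketch of what a combineLabels-style transform could look like, based only on the comments above; the one-hot channel layout is an assumption, and the repository's actual combineLabels may differ:

```python
import numpy as np

def combine_labels_sketch(y):
    """Hypothetical illustration of combining per-hemisphere labels.

    Assumes y is a one-hot array of shape (channels, H, W, D) with channels
    ordered [background, contralateral hemisphere, ipsilateral hemisphere];
    this layout is an assumption, not taken from the repository.
    """
    contra, ipsi = y[1], y[2]
    brainmask = np.clip(contra + ipsi, 0, 1)  # union of both hemispheres
    # Output channels: [background, brainmask, contra-hemisphere ROI]
    return np.stack([1 - brainmask, brainmask, contra])
```

With labels combined this way, Measure.all scores the brainmask and contra-hemisphere ROIs rather than the two individual hemispheres.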
