Merge pull request #17 from BrainLesion/feature/inferrer-class
Feature/inferrer class
MarcelRosier authored Jan 17, 2024
2 parents 2e2e57d + 15bbff6 commit 694d7c8
Showing 33 changed files with 1,455 additions and 768 deletions.
39 changes: 39 additions & 0 deletions .github/workflows/tests.yml
@@ -0,0 +1,39 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: tests

on:
push:
branches: ["main"]
pull_request:
branches: ["main"]

jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.10"] #TODO add 3.11 support (for 3.12 torch is not available yet)

steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install flake8 pytest
pip install -e .
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest
4 changes: 1 addition & 3 deletions .gitignore
@@ -26,7 +26,6 @@ share/python-wheels/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
@@ -130,5 +129,4 @@ dmypy.json

.vscode
poetry.lock

.DS_Store
.DS_Store
1 change: 1 addition & 0 deletions README.md
@@ -1,5 +1,6 @@
[![PyPI version panoptica](https://badge.fury.io/py/brainles-aurora.svg)](https://pypi.python.org/pypi/brainles-aurora/)
[![Documentation Status](https://readthedocs.org/projects/brainles-aurora/badge/?version=latest)](http://brainles-aurora.readthedocs.io/?badge=latest)
[![tests](https://github.com/BrainLesion/AURORA/actions/workflows/tests.yml/badge.svg)](https://github.com/BrainLesion/AURORA/actions/workflows/tests.yml)

# AURORA

107 changes: 53 additions & 54 deletions Tutorial.ipynb
@@ -17,7 +17,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**1. Download project and install dependencies**\n",
"**1. Download the BrainLes Aurora package**\n",
"\n",
"Download the github project with the following command:"
]
@@ -28,39 +28,15 @@
"metadata": {},
"outputs": [],
"source": [
"!git clone https://github.com/HelmholtzAI-Consultants-Munich/AURORA"
"!pip install brainles_aurora"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Install all requirements listed in requirements.txt:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cd AURORA\n",
"!pip install -r requirements.txt"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"**2. Preprocess data**\n",
"\n",
"The provided models need coregistered, skullstripped sequences as input.\n",
"\n",
"We recommend [BraTS-Toolkit](https://github.com/neuronflow/BraTS-Toolkit) for preprocessing, which covers the entire image analysis workflow prior to tumor segmentation.\n",
"\n",
"BraTS-Toolkit can be downloaded with the following command:"
"Install a preprocessing package, e.g. BraTS-Toolkit"
]
},
{
@@ -92,19 +68,42 @@
"Minimal mode: Segmentation without test-time augmentation with only T1-CE as input."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"__Minimal example__\n",
"\n",
"Logging will be messed up when used from a juypter notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"BasicUNet features: (32, 32, 64, 128, 256, 32).\n"
]
}
],
"source": [
"from lib import single_inference\n",
"from brainles_aurora.inferer.inferer import AuroraGPUInferer, AuroraInferer\n",
"from brainles_aurora.inferer.dataclasses import AuroraInfererConfig\n",
"\n",
"single_inference(\n",
" t1c_file=\"Examples/BraTS-MET-00110-000-t1c.nii.gz\",\n",
" segmentation_file=\"your_segmentation_file.nii.gz\",\n",
" tta=False, # optional: whether to use test time augmentations\n",
" verbosity=True, # optional: verbosity of the output\n",
"config = AuroraInfererConfig(\n",
" tta=False\n",
") # disable tta for faster inference in this showcase\n",
"\n",
"# If you don-t have a GPU that supports CUDA use the CPU version: AuroraInferer(config=config)\n",
"inferer = AuroraGPUInferer(config=config)\n",
"\n",
"inferer.infer(\n",
" t1=\"example_data/BraTS-MET-00110-000-t1c.nii.gz\",\n",
" segmentation_file=\"test_output/segmentation.nii.gz\",\n",
")"
]
},
@@ -132,25 +131,25 @@
"metadata": {},
"outputs": [],
"source": [
"from lib import single_inference\n",
"# from lib import single_inference\n",
"\n",
"single_inference(\n",
" t1_file=\"Examples/BraTS-MET-00110-000-t1n.nii.gz\",\n",
" t1c_file=\"Examples/BraTS-MET-00110-000-t1c.nii.gz\",\n",
" t2_file=\"Examples/BraTS-MET-00110-000-t2w.nii.gz\",\n",
" fla_file=\"Examples/BraTS-MET-00110-000-t2f.nii.gz\",\n",
" segmentation_file=\"Examples/your_segmentation_file.nii.gz\",\n",
" whole_network_outputs_file=\"Examples/your_whole_lesion_file.nii.gz\", # optional: whether to save network outputs for the whole lesion (metastasis + edema)\n",
" metastasis_network_outputs_file=\"Examples/your_metastasis_file.nii.gz\", # optional: whether to save network outputs for the metastasis\n",
" cuda_devices=\"0\", # optional: which CUDA devices to use\n",
" tta=True, # optional: whether to use test time augmentations\n",
" sliding_window_batch_size=1, # optional: adjust to fit your GPU memory, each step requires an additional 2 GB of VRAM, increasing is not recommended for single interference\n",
" workers=8, # optional: workers for the data laoder\n",
" threshold=0.5, # optional: where to threshold the network outputs\n",
" sliding_window_overlap=0.5, # optional: overlap for the sliding window\n",
" model_selection=\"best\", # optional: choose best or last checkpoint, best is recommended\n",
" verbosity=True, # optional: verbosity of the output\n",
")"
"# single_inference(\n",
"# t1_file=\"Examples/BraTS-MET-00110-000-t1n.nii.gz\",\n",
"# t1c_file=\"Examples/BraTS-MET-00110-000-t1c.nii.gz\",\n",
"# t2_file=\"Examples/BraTS-MET-00110-000-t2w.nii.gz\",\n",
"# fla_file=\"Examples/BraTS-MET-00110-000-t2f.nii.gz\",\n",
"# segmentation_file=\"Examples/your_segmentation_file.nii.gz\",\n",
"# whole_network_outputs_file=\"Examples/your_whole_lesion_file.nii.gz\", # optional: whether to save network outputs for the whole lesion (metastasis + edema)\n",
"# metastasis_network_outputs_file=\"Examples/your_metastasis_file.nii.gz\", # optional: whether to save network outputs for the metastasis\n",
"# cuda_devices=\"0\", # optional: which CUDA devices to use\n",
"# tta=True, # optional: whether to use test time augmentations\n",
"# sliding_window_batch_size=1, # optional: adjust to fit your GPU memory, each step requires an additional 2 GB of VRAM, increasing is not recommended for single interference\n",
"# workers=8, # optional: workers for the data laoder\n",
"# threshold=0.5, # optional: where to threshold the network outputs\n",
"# sliding_window_overlap=0.5, # optional: overlap for the sliding window\n",
"# model_selection=\"best\", # optional: choose best or last checkpoint, best is recommended\n",
"# verbosity=True, # optional: verbosity of the output\n",
"# )"
]
}
],
@@ -170,7 +169,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
"version": "3.10.13"
},
"orig_nbformat": 4
},
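For comparison with the deprecated single_inference call above, a multi-modality run with the new inferer class would presumably look like the sketch below. The modality keywords t1c, t2, and fla are assumptions inferred from the MODALITIES list in constants.py and from the legacy signature; this diff only confirms the t1 and segmentation_file arguments.

from brainles_aurora.inferer.dataclasses import AuroraInfererConfig
from brainles_aurora.inferer.inferer import AuroraGPUInferer

# Keep the defaults; test-time augmentation stays enabled for the full run.
config = AuroraInfererConfig(tta=True)

# Use AuroraInferer(config=config) instead if no CUDA-capable GPU is available.
inferer = AuroraGPUInferer(config=config)

# NOTE: the t1c/t2/fla keywords are assumptions based on the legacy
# single_inference() arguments; only t1 and segmentation_file appear in this diff.
inferer.infer(
    t1="example_data/BraTS-MET-00110-000-t1n.nii.gz",
    t1c="example_data/BraTS-MET-00110-000-t1c.nii.gz",
    t2="example_data/BraTS-MET-00110-000-t2w.nii.gz",
    fla="example_data/BraTS-MET-00110-000-t2f.nii.gz",
    segmentation_file="test_output/segmentation.nii.gz",
)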
22 changes: 21 additions & 1 deletion brainles_aurora/aux.py
@@ -1,5 +1,6 @@
from path import Path
from pathlib import Path
import os
from typing import IO


def turbo_path(the_path):
@@ -11,3 +12,22 @@ def turbo_path(the_path):
)
)
return turbo_path


class DualStdErrOutput:
def __init__(self, stderr: IO, file_handler_stream: IO = None):
self.stderr = stderr
self.file_handler_stream = file_handler_stream

def set_file_handler_stream(self, file_handler_stream: IO):
self.file_handler_stream = file_handler_stream

def write(self, text):
self.stderr.write(text)
if self.file_handler_stream:
self.file_handler_stream.write(text)

def flush(self):
self.stderr.flush()
if self.file_handler_stream:
self.file_handler_stream.flush()
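A minimal usage sketch for this helper (the logging setup below is illustrative and not taken from this PR): sys.stderr is swapped for a DualStdErrOutput so that everything written to stderr is also mirrored into the stream of a logging FileHandler.

import logging
import sys

from brainles_aurora.aux import DualStdErrOutput

# Wrap the real stderr; console output is preserved.
dual_stderr = DualStdErrOutput(sys.stderr)
sys.stderr = dual_stderr

# Mirror stderr into the same file the logging handler writes to.
file_handler = logging.FileHandler("aurora_inference.log")
dual_stderr.set_file_handler_stream(file_handler.stream)

logging.basicConfig(level=logging.INFO, handlers=[file_handler])
logging.getLogger(__name__).info("log records and stderr now share one file")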
6 changes: 3 additions & 3 deletions brainles_aurora/download.py
@@ -1,10 +1,10 @@
# copied from https://github.com/Nordgaren/Github-Folder-Downloader/blob/master/gitdl.py
import os
from github import Github, Repository, ContentFile
import requests

import shutil as sh

import requests
from github import ContentFile, Github, Repository


def download(c: ContentFile, out: str):
r = requests.get(c.download_url)
Empty file.
80 changes: 80 additions & 0 deletions brainles_aurora/inferer/constants.py
@@ -0,0 +1,80 @@
from enum import Enum


class InferenceMode(str, Enum):
"""Enum representing different modes of inference based on available image inputs
Enum Values:
T1_T1C_T2_FLA (str): All four modalities are available.
T1_T1C_FLA (str): T1, T1C, and FLAIR are available.
T1_T1C (str): T1 and T1C are available.
T1C_FLA (str): T1C and FLAIR are available.
T1C_O (str): T1C is available.
FLA_O (str): FLAIR is available.
T1_O (str): T1 is available.
"""

T1_T1C_T2_FLA = "t1-t1c-t2-fla"
T1_T1C_FLA = "t1-t1c-fla"
T1_T1C = "t1-t1c"
T1C_FLA = "t1c-fla"
T1C_O = "t1c-o"
FLA_O = "fla-o"
T1_O = "t1-o"


class ModelSelection(str, Enum):
"""Enum representing different strategies for model selection.
Enum Values:
BEST (str): Select the best performing model.
LAST (str): Select the last model.
VANILLA (str): Select the vanilla model.
"""

BEST = "best"
LAST = "last"
VANILLA = "vanilla"


class DataMode(str, Enum):
"""Enum representing different modes for handling input and output data.
Enum Values:
NIFTI_FILE (str): Input data is provided as NIFTI file paths / output is written to NIFTI files.
NUMPY (str): Input data is provided as NumPy arrays / output is returned as NumPy arrays.
"""

NIFTI_FILE = "NIFTI_FILEPATH"
NUMPY = "NP_NDARRAY"


class Output(str, Enum):
"""Enum representing different types of output.
Enum Values:
SEGMENTATION (str): Segmentation mask.
WHOLE_NETWORK (str): Whole network output.
METASTASIS_NETWORK (str): Metastasis network output.
"""

SEGMENTATION = "segmentation"
WHOLE_NETWORK = "whole_network"
METASTASIS_NETWORK = "metastasis_network"


MODALITIES = ["t1", "t1c", "t2", "fla"]
"""List of modality names in standard order: T1 T1C T2 FLAIR (['t1', 't1c', 't2', 'fla'])"""


# booleans indicate presence of files in order: T1 T1C T2 FLAIR
IMGS_TO_MODE_DICT = {
(True, True, True, True): InferenceMode.T1_T1C_T2_FLA,
(True, True, False, True): InferenceMode.T1_T1C_FLA,
(True, True, False, False): InferenceMode.T1_T1C,
(False, True, False, True): InferenceMode.T1C_FLA,
(False, True, False, False): InferenceMode.T1C_O,
(False, False, False, True): InferenceMode.FLA_O,
(True, False, False, False): InferenceMode.T1_O,
}
"""Dictionary mapping tuples of booleans representing presence of the modality in order [t1,t1c,t2,fla] to InferenceMode values."""
44 changes: 44 additions & 0 deletions brainles_aurora/inferer/dataclasses.py
@@ -0,0 +1,44 @@
import logging
from dataclasses import dataclass
from typing import Tuple


from brainles_aurora.inferer.constants import DataMode, ModelSelection


@dataclass
class BaseConfig:
"""Base configuration for the Aurora model inferer.
Attributes:
output_mode (DataMode, optional): Output mode for the inference results. Defaults to DataMode.NIFTI_FILE.
log_level (int | str, optional): Logging level. Defaults to logging.INFO.
"""

output_mode: DataMode = DataMode.NIFTI_FILE
log_level: int | str = logging.INFO


@dataclass
class AuroraInfererConfig(BaseConfig):
"""Configuration for the Aurora model inferer.
Attributes:
output_mode (DataMode, optional): Output mode for the inference results. Defaults to DataMode.NIFTI_FILE.
log_level (int | str, optional): Logging level. Defaults to logging.INFO.
tta (bool, optional): Whether to apply test-time augmentations. Defaults to True.
sliding_window_batch_size (int, optional): Batch size for sliding window inference. Defaults to 1.
workers (int, optional): Number of workers for data loading. Defaults to 0.
threshold (float, optional): Threshold for binarizing the model outputs. Defaults to 0.5.
sliding_window_overlap (float, optional): Overlap ratio for sliding window inference. Defaults to 0.5.
crop_size (Tuple[int, int, int], optional): Crop size for sliding window inference. Defaults to (192, 192, 32).
model_selection (ModelSelection, optional): Model selection strategy. Defaults to ModelSelection.BEST.
"""

tta: bool = True
sliding_window_batch_size: int = 1
workers: int = 0
threshold: float = 0.5
sliding_window_overlap: float = 0.5
crop_size: Tuple[int, int, int] = (192, 192, 32)
model_selection: ModelSelection = ModelSelection.BEST
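A short usage sketch for the new configuration dataclass; the field names come from the definitions above, while the specific values are illustrative only.

import logging

from brainles_aurora.inferer.constants import DataMode, ModelSelection
from brainles_aurora.inferer.dataclasses import AuroraInfererConfig

# Every field has a default, so only the settings to override are passed.
config = AuroraInfererConfig(
    output_mode=DataMode.NIFTI_FILE,      # write results as NIFTI files
    log_level=logging.DEBUG,              # more verbose logging
    tta=False,                            # skip test-time augmentation for speed
    workers=4,                            # data-loading workers
    model_selection=ModelSelection.BEST,  # use the best checkpoint
)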