Next week:
- Chase RDS
- Costings (Munich)
- On the supercomputer
- Hyperparameter optimisation and evaluation of all these options
- Plot curves of sensitivity and specificity (sketch after this list)
- How to save and compare experiments. Make a plot to summarise experiments.
- Loading in histology. K
- Saliency as a way to evaluate the impact of features.
- Architectures, dropout and losses - run for both types of curves. No subsampling of the training dataset. K
- Cross-validation as default - building nested experiments > folds (sketch after this list). H XX
- Evaluation scripts to summarise results K
- Save out logs H
- Kanban board S
- Title plots. K
- Summarising scripts (sketches after this list):
  - Ditch the binary curve. Optimisation curves across folds - do first. Aggregate performance stats across folds. Consider re-evaluating at a shared optimal threshold. K
  - Raincloud plots - each evaluation statistic across all folds. Define which experiments to compare: initially whole experiments, then sub-experiments. K
  - Per-subject plots across experiments (coloured lines), and a matrix of overlap between experiments.
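A minimal sketch of the sensitivity/specificity curves, assuming per-subject predicted probabilities and binary labels are already in hand; the variable names, dummy data and output filename below are placeholders, not the project's real evaluation code.

```python
import numpy as np
import matplotlib.pyplot as plt

def sensitivity_specificity_curve(y_true, y_score, n_thresholds=101):
    """Compute sensitivity and specificity at evenly spaced probability thresholds."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    y_true = np.asarray(y_true).astype(bool)
    y_score = np.asarray(y_score)
    sens, spec = [], []
    for t in thresholds:
        y_pred = y_score >= t
        tp = np.sum(y_pred & y_true)
        tn = np.sum(~y_pred & ~y_true)
        fn = np.sum(~y_pred & y_true)
        fp = np.sum(y_pred & ~y_true)
        sens.append(tp / (tp + fn) if (tp + fn) else np.nan)
        spec.append(tn / (tn + fp) if (tn + fp) else np.nan)
    return thresholds, np.array(sens), np.array(spec)

# Dummy data; replace with real fold predictions.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)

thr, sens, spec = sensitivity_specificity_curve(y_true, y_score)
plt.plot(thr, sens, label="sensitivity")
plt.plot(thr, spec, label="specificity")
plt.xlabel("decision threshold")
plt.ylabel("rate")
plt.title("Sensitivity / specificity vs threshold")  # title plots (K)
plt.legend()
plt.savefig("sens_spec_curve.png", dpi=150)
```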
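A minimal sketch of the experiments > folds layout with cross-validation as the default and per-fold logs saved out for the summary scripts. The directory layout, experiment name and the train_and_evaluate stand-in are assumptions, to be replaced by the real training code.

```python
import json
from pathlib import Path

import numpy as np
from sklearn.model_selection import StratifiedKFold

def run_experiment(name, X, y, n_folds=5, out_root=Path("experiments")):
    """Nested layout: one directory per experiment, one sub-directory per fold."""
    exp_dir = out_root / name
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
        fold_dir = exp_dir / f"fold_{fold}"
        fold_dir.mkdir(parents=True, exist_ok=True)
        metrics = train_and_evaluate(X[train_idx], y[train_idx], X[test_idx], y[test_idx])
        # Save out logs (H): one JSON per fold so the summary scripts can aggregate later.
        (fold_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))

def train_and_evaluate(X_tr, y_tr, X_te, y_te):
    # Dummy stand-in that only records sizes and class balance; replace with the real model.
    return {"n_train": int(len(y_tr)), "n_test": int(len(y_te)),
            "test_positive_rate": float(np.mean(y_te))}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)
    run_experiment("baseline_dropout0.5", X, y)
```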
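A minimal sketch of a summarising script: read the per-fold metrics files written above, aggregate the statistics across folds, and show one evaluation statistic across all folds per experiment. The raincloud plot is approximated here with a violin plus strip plot (a dedicated raincloud package could replace it); the experiment names and the metric column are placeholders.

```python
import json
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

def load_fold_metrics(exp_dirs):
    """Collect the per-fold metrics JSONs into one table with experiment/fold columns."""
    rows = []
    for exp_dir in exp_dirs:
        for metrics_file in sorted(Path(exp_dir).glob("fold_*/metrics.json")):
            row = json.loads(metrics_file.read_text())
            row["experiment"] = Path(exp_dir).name
            row["fold"] = metrics_file.parent.name
            rows.append(row)
    return pd.DataFrame(rows)

df = load_fold_metrics(["experiments/baseline_dropout0.5", "experiments/focal_loss"])
if df.empty:
    raise SystemExit("No fold metrics found - run the experiments first.")

# Aggregate performance stats across folds, per experiment.
print(df.groupby("experiment")["test_positive_rate"].agg(["mean", "std"]))

# Raincloud-style view: distribution plus the individual folds for one statistic.
ax = sns.violinplot(data=df, x="experiment", y="test_positive_rate", inner=None, cut=0)
sns.stripplot(data=df, x="experiment", y="test_positive_rate", color="black", ax=ax)
ax.set_title("Per-fold evaluation statistic by experiment")
plt.savefig("fold_summary.png", dpi=150)
```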
Further ideas:
- Monte Carlo dropout for uncertainty estimation (sketch after this list)
- Ensemble models, e.g. with 3 loss functions or different architectures. Need an experiment to check whether the false positives change (sketch after this list).
- Prior with lesion map
- Applying ComBat to a new site to align the data (sketch after this list)
- How we manage new patients from new sites. Versioning of the dataset and any trained classifier.
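A minimal sketch of Monte Carlo dropout for uncertainty estimation, assuming a PyTorch classifier that contains nn.Dropout layers; the toy model, input shape and sample count are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 1))

def mc_dropout_predict(model, x, n_samples=50):
    """Run repeated stochastic forward passes with dropout left on."""
    model.eval()
    # Re-enable dropout layers only, keeping layers like batch norm in eval mode.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and uncertainty

x = torch.randn(4, 10)
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(), std.squeeze())
```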
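A minimal sketch of the ensemble check: average the members' probabilities and compare which subjects each member flags as false positives. The member names and dummy predictions are placeholders for models trained with different losses or architectures.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 100).astype(bool)
# Placeholder predictions from three members (e.g. three loss functions).
member_probs = {name: rng.random(100) for name in ["bce", "focal", "dice"]}

ensemble_prob = np.mean(list(member_probs.values()), axis=0)

def false_positives(probs, threshold=0.5):
    """Boolean mask of subjects predicted positive but actually negative."""
    return (probs >= threshold) & ~y_true

fp_sets = {name: false_positives(p) for name, p in member_probs.items()}
fp_sets["ensemble"] = false_positives(ensemble_prob)

# Pairwise overlap of false positives: do the members fail on the same subjects?
names = list(fp_sets)
overlap = np.array([[np.sum(fp_sets[a] & fp_sets[b]) for b in names] for a in names])
print(names)
print(overlap)
```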
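A minimal sketch of the idea behind aligning a new site: match each feature's location and scale to reference-site statistics. This is a simplification, not full empirical-Bayes ComBat (which pools information across features and protects biological covariates via a design matrix); a dedicated package such as neuroCombat would be used in practice. All shapes and values here are placeholders.

```python
import numpy as np

def align_new_site(new_site_features, ref_mean, ref_std, eps=1e-8):
    """Rescale a new site's feature matrix (subjects x features) to match
    the reference site's per-feature mean and standard deviation."""
    new_mean = new_site_features.mean(axis=0)
    new_std = new_site_features.std(axis=0)
    z = (new_site_features - new_mean) / (new_std + eps)
    return z * ref_std + ref_mean

rng = np.random.default_rng(0)
reference_site = rng.normal(loc=0.0, scale=1.0, size=(80, 20))
new_site = rng.normal(loc=0.5, scale=1.5, size=(30, 20))  # shifted/scaled scanner effect

aligned = align_new_site(new_site, reference_site.mean(axis=0), reference_site.std(axis=0))
print(aligned.mean(axis=0)[:3], aligned.std(axis=0)[:3])  # now close to reference stats
```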