The pipeline to manipulate and analyse MRI data from the "Encephalopathy of prematurity" project, in collaboration with KU Leuven.
- This is data-specific code developed for the encephalopathy of prematurity project.
- The input of the pipeline is a subject scanned in the native Bruker format.
- The output of the pipeline is the subject, oriented and segmented, provided with subject-wise and comparative data analysis.
- WARNING: the code is developed to be research code (read: 80% is an uncommented work in progress, not packaged to be a product; see the definition of research code).
- The code is developed in collaboration with University College London (UCL) and Katholieke Universiteit Leuven (KUL).
- The pipeline schema can be found under the `notes` folder.
- Based on the libraries in `requirements.txt` (there are non-pip-installable dependencies: follow the installation instructions directly from the README of each library's repository), in addition to NiftyReg, NiftySeg and FSL; a minimal check of these external tools is sketched right after this list.
- The package `main_pipeline` has the structure of the pipeline as in the schematic of the documentation.
- Each module of the pipeline can run independently (for debugging and single-step analysis) or as part of the whole pipeline, when called by the `main`; a hypothetical sketch of both usages also follows this list.
- The code can be used for any dataset (with the required changes), and the authors intend to make the acquired dataset publicly available at the end of the study.
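The external tools are easiest to verify before launching anything. The snippet below is a minimal sketch (not part of the repository) that checks a few representative NiftyReg, NiftySeg and FSL executables are reachable on the PATH; the executable names are the standard ones shipped by those packages.

```python
# check_external_deps.py -- minimal sketch, not part of the repository.
# Checks that representative NiftyReg, NiftySeg and FSL executables are
# reachable on the PATH before the pipeline is launched.
import shutil
import sys

REQUIRED_EXECUTABLES = {
    "NiftyReg": ["reg_aladin", "reg_f3d", "reg_resample"],
    "NiftySeg": ["seg_maths", "seg_stats"],
    "FSL": ["fslmaths", "flirt", "bet"],
}

missing = [
    f"{tool}: {exe}"
    for tool, executables in REQUIRED_EXECUTABLES.items()
    for exe in executables
    if shutil.which(exe) is None
]

if missing:
    print("Missing external dependencies:")
    print("\n".join(f"  - {m}" for m in missing))
    sys.exit(1)

print("All external dependencies found on the PATH.")
```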
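To make the "independent vs. whole pipeline" distinction concrete, here is a hypothetical sketch: the step functions below are placeholders, not the repository's actual module API. Each step processes one subject, and the `main` simply loops the selected steps over the selected subjects.

```python
# Hypothetical sketch of the two ways a pipeline step can be invoked.
# orient_subject and segment_subject are placeholders for the per-step
# entry points of the actual modules.

def orient_subject(subject_id, params):
    print(f"[orient] subject {subject_id}, params {params}")

def segment_subject(subject_id, params):
    print(f"[segment] subject {subject_id}, params {params}")

default_params = {"study": "example_study", "category": "example_category"}

# 1) Independent use, e.g. while debugging a single step on one subject:
orient_subject("1201", default_params)

# 2) Whole-pipeline use: the main loops the selected steps over the subjects.
steps = [orient_subject, segment_subject]
subjects = ["1201", "1203"]

for sj in subjects:
    for step in steps:
        step(sj, default_params)
```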
- Connect to the dataset (NAS `Emporium` if at KCL) and to the rabbit dataset folder structure (the pipeline can also be used on the cluster after updating the dataset).
- Run `A0_main/subject_parameters_creator.py` to generate the latest parameters file (the created parameters file connects the subject name with its chart, containing all the information related to the subject, e.g. study, category, orientation...).
- Select the parts that you want to run and the subjects you want to apply the pipeline to in `A0_main/main_executer.py` (a sketch of this selection follows below).
- Run `main_executer.py`.
- Raise an issue if something goes wrong.
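Concretely, the selection in `A0_main/main_executer.py` amounts to editing a couple of data structures at the top of the file. The sketch below only illustrates that idea: the step names and variable names are hypothetical, so check the actual file for the names it exposes.

```python
# Hypothetical sketch of the selection expected in A0_main/main_executer.py.
# The step names and variable names are illustrative only.

# Steps of the pipeline to run (True = run, False = skip):
steps_to_run = {
    "convert_raw_bruker": True,
    "orient_to_standard": True,
    "segment": False,
    "analyse": False,
}

# Subjects to apply the selected steps to:
subjects_to_process = ["1201", "1203", "1305"]

if __name__ == "__main__":
    for sj in subjects_to_process:
        for step_name, selected in steps_to_run.items():
            if selected:
                print(f"Running {step_name} for subject {sj}")
                # ...call the corresponding pipeline module here...
```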
- Update the log of the received files in the root of the rabbit analysis folder on Emporium.
- Follow the existing folder structure (see under `notes`) and store the .zip in the appropriate place.
- Rename the zip to only the filename (following the existing structure).
- Add the chart of the new subject in the file `A0_main/subject_parameters_creator.py` with default values (see the sketch after this list).
- Start the A0 phase of the pipeline.
- Check the converted subject and update the parameters under `A0_main/subject_parameters_creator.py` according to visual inspection.
- Re-run `A0_main/subject_parameters_creator.py` to update the parameters.
- Run the whole pipeline `main_executer.py` for the selected subject.
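The chart mentioned above is an instance of the class `SubjectParameters` (see the work-in-progress notes further down). The sketch below only illustrates the idea of "add with defaults, then refine after visual inspection"; the constructor arguments are hypothetical, not the actual attributes of the repository's class.

```python
# Hypothetical sketch of adding a new subject chart with default values in
# A0_main/subject_parameters_creator.py. The attributes shown here are
# illustrative stand-ins for the ones SubjectParameters actually defines.

class SubjectParameters:  # stand-in for the repository's class
    def __init__(self, name, study="", category="", orientation="unknown"):
        self.name = name
        self.study = study
        self.category = category
        self.orientation = orientation

subjects_parameters = {}

# 1) First pass: register the new subject with default values, then run phase A0.
subjects_parameters["1501"] = SubjectParameters("1501")

# 2) After visually checking the converted subject, refine the chart and
#    re-run subject_parameters_creator.py so the updated values are saved.
subjects_parameters["1501"].study = "example_study"
subjects_parameters["1501"].category = "example_category"
subjects_parameters["1501"].orientation = "example_orientation"
```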
- Main work in progress is under `danny_approval`. Danny (see crash testing) is a dummy dataset created with the GitHub repository DummyForMRI. Under `danny_approval` we intend to develop automatic testing that runs the pipeline on Danny and checks for possible issues (a sketch of such a test is given after this list).
- Improve the parameters file `subject_parameters_creator.py`. Currently there is a single file instantiating an element of the class `SubjectParameters`. Re-design with a YAML parameter file, a `yaml_parameters_checker` method and a `yaml_parameters_creator` (also sketched after this list).
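For the `danny_approval` item, a sketch of what such an automatic test could look like is given below; `build_dummy_subject` and `run_phase_A0` are placeholders, to be replaced with the real DummyForMRI entry point and the pipeline's A0 call.

```python
# Hypothetical sketch of an automatic crash test under danny_approval.
# build_dummy_subject and run_phase_A0 are placeholders for the real
# DummyForMRI generator and the pipeline's A0 phase.
import tempfile
from pathlib import Path


def build_dummy_subject(destination):
    """Placeholder for the DummyForMRI dataset generator."""
    destination = Path(destination)
    destination.mkdir(parents=True, exist_ok=True)
    (destination / "Danny.zip").touch()  # stand-in for a real dummy acquisition
    return destination


def run_phase_A0(dataset_root):
    """Placeholder for launching phase A0 of the pipeline on dataset_root."""
    return True


def test_pipeline_does_not_crash_on_danny():
    with tempfile.TemporaryDirectory() as tmp:
        dataset_root = build_dummy_subject(Path(tmp) / "Danny")
        assert run_phase_A0(dataset_root)
```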
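For the parameters re-design, here is a minimal sketch assuming PyYAML and an illustrative set of keys; the function names follow the proposal above, but none of this is in the repository yet.

```python
# Sketch of the proposed yaml_parameters_creator / yaml_parameters_checker
# pair. Requires PyYAML. The set of required keys is illustrative.
import yaml

REQUIRED_KEYS = {"name", "study", "category", "orientation"}


def yaml_parameters_creator(subject_name, path_yaml):
    """Write a parameter file for one subject, filled with default values."""
    defaults = {
        "name": subject_name,
        "study": "",
        "category": "",
        "orientation": "unknown",
    }
    with open(path_yaml, "w") as f:
        yaml.safe_dump(defaults, f)


def yaml_parameters_checker(path_yaml):
    """Check that a subject parameter file contains all the required keys."""
    with open(path_yaml) as f:
        params = yaml.safe_load(f)
    missing = REQUIRED_KEYS - set(params)
    if missing:
        raise ValueError(f"{path_yaml} is missing the keys: {sorted(missing)}")
    return params
```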
To push the latest commits to the GIFT-Surg repo use:

```
git remote add origin_gs https://github.com/gift-surg/RabbitBrainAnalysis.git
git push origin_gs master
```
To push the latest commits to the CMIClab platform (until 15 September 2018) use:

```
git push origin master
```

To check the remotes in your local repository type:

```
git remote -v
```
- This repository is developed within the GIFT-surg research project.
- This work was supported by Wellcome / Engineering and Physical Sciences Research Council (EPSRC) [WT101957; NS/A000027/1; 203145Z/16/Z].