
MRTRIX DTI conversion #3

Open

arokem opened this issue Dec 7, 2022 · 13 comments

arokem (Collaborator) commented Dec 7, 2022

Write Python code that runs the mrtrix CLI to produce DTI outputs on the preprocessed data (which the current code puts in BIDS format), and then organizes them into a BEP16-compliant derivative dataset.

arokem (Collaborator, Author) commented Dec 7, 2022

@Lestropie : do you think this is something you could implement?

arokem (Collaborator, Author) commented Dec 7, 2022

See also #2

arokem (Collaborator, Author) commented Dec 7, 2022

Also note that this requires #1 to be completed, and preferably also #6

Lestropie commented:

Sorry, I wrote #7 but didn't actually reply here.

Happy to take responsibility for this bit, but we need to decide on an overall structure of what the code should look like across the various conversions; we don't want completely disparate implementations.

PeerHerholz (Owner) commented:

mrtrix was added to the package via this commit.

PeerHerholz (Owner) commented:

Hi @Lestropie,

as we have the first draft of the DTI conversion for DIPY outputs ready, I wanted to start working on the mrtrix equivalent. However, as I'm rather unfamiliar with mrtrix, I wanted to ask if you could maybe share the common/best workflow for a DTI analysis using mrtrix? We have the following files for a given participant, obtained via QSIprep:

preproc_dwi.nii.gz, preproc_dwi.bval, preproc_dwi.bvec, desc-brain_mask.nii.gz

Based on #10, we would need to convert the files to .mif for processing and back to .nii afterwards, right?
It would be cool to get your thoughts on this.

Lestropie commented:

If you want to keep things simple, the process itself is simple also:

  1. Call dwi2tensor
    No need to convert to .mif, just provide the NIfTI as input and provide the bvec & bval via the -fslgrad option.
    Just use the default fitting mechanism & parameters.
    Can provide the brain mask via -mask option if you wish.
  2. Call tensor2metric
    One command-line option and NIfTI output file path per metric.
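For example, something like the following in Python (an untested sketch, not the project's actual code; the file names are placeholders based on the QSIprep outputs listed above, and the MRtrix3 commands are assumed to be on PATH):

import subprocess

dwi = "preproc_dwi.nii.gz"
bvec, bval = "preproc_dwi.bvec", "preproc_dwi.bval"
mask = "desc-brain_mask.nii.gz"

# 1. Fit the tensor directly from the NIfTI, passing bvec/bval via -fslgrad
#    (no .mif conversion needed); the brain mask is optional.
subprocess.run(
    ["dwi2tensor", dwi, "tensor.nii.gz",
     "-fslgrad", bvec, bval,
     "-mask", mask],
    check=True,
)

# 2. Derive scalar metrics from the fitted tensor:
#    one command-line option and one NIfTI output path per metric.
subprocess.run(
    ["tensor2metric", "tensor.nii.gz",
     "-fa", "fa.nii.gz",
     "-adc", "md.nii.gz",  # MRtrix3's -adc is the mean diffusivity (MD)
     "-ad", "ad.nii.gz",
     "-rd", "rd.nii.gz"],
    check=True,
)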

The complexity only arises if, for instance, you want changes to the fitting procedure to be reflected in the BIDS Derivatives metadata. E.g., currently the default tensor fit is IWLS with 2 iterations, but a software user could explicitly request a different procedure at the command line, or the default behaviour could change between MRtrix3 versions, and ideally the resultant metadata should always be faithful to what was performed.

As a very basic single use case it's pretty easy. Indeed we think it's so simple that we've never generated a documentation page for it (MRtrix3/mrtrix3#2456).

PeerHerholz (Owner) commented:

Thanks so much, @Lestropie!

Following the "standard"/"easy" use case without changing defaults, I implemented a first draft of the respective pipeline here, which generates the following output:

bids_bep16_datasets/HBN/derivatives/mrtrix_dti/
├── dataset_description.json
└── sub-NDARYM277DEA
    └── ses-HBNsiteCBIC
        └── dwi
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-ad_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-bzero_model.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-fs_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-md_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-rd_mdp.nii.gz
            └── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-tensor_model.nii.gz 

Would this be accurate (sorry, I have no experience with mrtrix)? Also, I noticed the output is different from what we get via the dipy equivalent, which looks like this:

bids_bep16_datasets/HBN/derivatives/dipy_dti/
├── dataset_description.json
└── sub-NDARYM277DEA
    └── ses-HBNsiteCBIC
        └── dwi
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-ad_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-ad_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-evals_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-evals_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-evecs_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-evecs_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-fa_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-fa_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-ga_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-ga_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-md_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-md_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-mode_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-mode_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-rd_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-rd_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-rgb_mdp.json
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-rgb_mdp.nii.gz
            ├── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-tensor_model.json
            └── sub-NDARYM277DEA_ses-HBNsiteCBIC_acq-64dir_space-T1w_param-tensor_model.nii.gz

Leaving the JSON files aside for now, dipy generates a few more _param-<param>_mdp files, specifically *param-rgb_mdp.nii.gz, *param-mode_mdp.nii.gz, *param-ga_mdp.nii.gz, *param-evecs_mdp.nii.gz, and *param-evals_mdp.nii.gz. How should we address this?

Lestropie commented:

dipy generates a few more _param-<param>_mdp files, specifically param-rgb_mdp.nii.gz, param-mode_mdp.nii.gz, param-ga_mdp.nii.gz, param-evecs_mdp.nii.gz and param-evals_mdp.nii.gz

  • "RGB":
    • Does not indicate the nature of the information contained within. Presumably it is the principal eigenvector that defines the direction. But theoretically you could:
      • Have all three eigenvectors encoded in 9 volumes in one image.
      • Have the intensity of each voxel modulated by some other parameter.
    • Is not a parameter, but a data representation.
  • I don't think we have mode implemented. It could be easily added if there's an appropriate reference equation. But it's not the end of the world if it's not there. We're not doing an experiment of equivalence between dipy and MRtrix3.
  • "GA" = Generalized Anisotropy? That's a slightly unusual thing to quantify from a tensor.
  • "evecs" is presumably all three eigenvectors, which you can get from tensor2metric -vector -num 1,2,3. Also be wary of whether or not there is modulation of the norms of those vectors.
  • "evals" you can get with tensor2metric -value -num 1,2,3.

PeerHerholz (Owner) commented:

Hi @Lestropie,

Thanks for the reply. All the files and their naming patterns are based on the current state of the BEP, and I followed the information/examples noted there.

We're not doing an experiment of equivalence between dipy and MRtrix3.

True that, but as these files were listed in the current state of the BEP, I assumed all of them should be there in order to increase compliance with the BEP.

"evecs" is presumably all three eigenvectors, which you can get from tensor2metric -vector -num 1,2,3. Also be wary of whether or not there is modulation of the norms of those vectors.
"evals" you can get with tensor2metric -value -num 1,2,3

Cool, thanks for the hint! This is related to #7 in terms of default vs. additional parameters of the respective processing pipeline.

Lestropie commented:

I assumed all of them should be there in order to increase compliance with the BEP.

I wouldn't call this "increased compliance". A BEP016 derivative dataset can have just two files yet be 100% compliant. It's more like "increased breadth of demonstration of compliance". Indeed, it may be worthwhile to generate different exemplar derivatives with more / fewer model-derived parameters in order to communicate this point.

arokem (Collaborator, Author) commented Jan 31, 2023

I think that a good way forward here would be to pare down the dipy outputs so that we get exactly the same output dataset for both software pipelines. DIPY does not currently generate a b0 through its CLI, so I wonder whether we should also pare that down from mrtrix. I can see arguments against that.

Lestropie commented:

DIPY does not currently generate a b0 through its CLI, so I wonder whether we should also pare that down from mrtrix

That would worry me a little, as it would hide a historically problematic case. This relates to bids-standard/bids-bep016#54, bids-standard/bids-bep016#64, and probably many others that I can't immediately find links to; it ties into the whole MFP / MDP discussion.

  1. If your tensor model fit yields an estimated b=0 signal, then this isn't like e.g. FA or MD, which are subsequently estimated from the tensor coefficients; it's an intrinsic result of fitting the model to the empirical image data. So it can't be regenerated after the fact. Hence the MFP / MDP distinction.

  2. Where I was discriminating between MFP and MDP, consider the distinction in the former between one case where b=0 is considered part of the model fit alongside the tensor coefficients, and another case where it isn't. In the latter case, your model representation is simple: it's just a 6-volume NIfTI and a single JSON. But in the former case, you have a 6-volume tensor NIfTI, a 3D scalar NIfTI, separate JSONs for each (as their data representations are different), but also ideally a JSON that encodes the metadata that's common to them both, i.e. that applies to the model, the fitting process, etc. So this is possibly the simplest case that highlights the issues around complex inheritance and the MFP / MDP distinction, and I don't think it can be masked by the 80/20 rule given it's the tensor model.
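To make that concrete, one purely hypothetical layout for the former case, reusing the param-*_model naming pattern from the listings above (where <prefix> stands for the usual sub-.../ses-.../acq-.../space-... entities; the shared model-level JSON name is illustrative only, not something prescribed by BEP016):

    <prefix>_param-tensor_model.nii.gz    (6 volumes: tensor coefficients)
    <prefix>_param-tensor_model.json      (metadata specific to the tensor representation)
    <prefix>_param-bzero_model.nii.gz     (3D: estimated b=0 signal)
    <prefix>_param-bzero_model.json       (metadata specific to the b=0 representation)
    <prefix>_model.json                   (hypothetical shared sidecar: model & fitting metadata common to both)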
