
Enhancement Request for fmriprep output #939

Closed
bkraft4257 opened this issue Jan 26, 2018 · 12 comments

@bkraft4257

I have been using fmriprep on our data and it is working great. I have been looking very closely at the normalization of our data to MNI space. I am concerned that my normalization could be better, but I have very limited experience judging how good is good enough. I would like to suggest that some of the BIDS example data sets be accompanied by output from fmriprep. This would give new users such as myself not only data to practice on, but also results produced by experts to compare against.

For example, I have processed the ds001 data set without distortion correction and without --use-syn-sdc. When I combine the first volume of each subject's BOLD output in MNI space (run-01 only) into a 4D file and play it, the images appear to jump around. The normalization is clearly working, but I don't know whether this jumping is normal and within acceptable limits.
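For context, here is a minimal sketch of how such a 4D file can be assembled with nibabel; the glob pattern and output filename are only illustrative and would need to be adapted to the actual fmriprep derivatives layout:

```python
# Sketch: stack the first volume of each subject's normalized BOLD run
# into one 4D image for visual comparison across subjects.
# The glob pattern below is hypothetical; adjust to your derivatives layout.
from glob import glob
import nibabel as nib
import numpy as np

bold_files = sorted(glob(
    "derivatives/fmriprep/sub-*/func/"
    "sub-*_task-*_run-01_*space-MNI152NLin2009cAsym*preproc*.nii.gz"))

first_vols = []
for fname in bold_files:
    img = nib.load(fname)
    first_vols.append(img.get_fdata()[..., 0])  # first volume only

# All normalized images share the MNI grid, so they can be stacked directly.
stacked = nib.Nifti1Image(np.stack(first_vols, axis=-1), img.affine)
nib.save(stacked, "first_volumes_across_subjects.nii.gz")
```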
Here is an animated GIF of the images to illustrate the jumping.

[animated GIF: first MNI-space BOLD volumes across subjects, showing the "jumping"]

I would appreciate it if you could comment on whether this jumping is normal. Also, if there are example data sets with fmriprep output available for practicing, please let me know.

@chrisgorgo
Contributor

This looks to me like the effect of susceptibility distortion, which to some extent could be fixed by the SyN SDC option. Looking at the individual reports of outlier participants would provide more clues (it is possible that a particular step did not work correctly). Further improvements could potentially be achieved if direct EPI normalization were implemented (see #620; if you would like to take a stab at it, we would be happy to help).

Please keep in mind that ds000001 is one of the oldest datasets in OpenfMRI and thus might not be representative of typical input data quality. You can find more examples of processed data (and request processing of specific datasets) at http://openneuro.org.

@oesteban
Member

To check whether this is a normalization problem, could you make the same video using the skull-stripped T1w images?

The rationale is that spatial normalization is driven by the T1 image, so that is the image in which we should check normalization performance.

If that test comes out positive (meaning spatial normalization is good), which I expect, then we can look into other details. But I agree with @chrisfilo in suspecting that susceptibility distortions are responsible for these "jumps". In fact, if you look at the ventricles, they all seem pretty close to one another (except for that one case that looks wider).

Let us know what you find. If you could post a new GIF of the T1w images for the cases you selected, that would be awesome.

@bkraft4257
Author

Thank you for your comments. Here is a GIF of the T1w normalization. I think it is very good but would like your comments on it.

[animated GIF: normalized T1w images across subjects]

I am in the process of rerunning the data with the SyN SDC option to see if this improves the normalization. I have started to read the suggested paper, which is going to take some time. I am not sure I am up to the task of writing the necessary code for the EPI normalization until I learn more.
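For anyone following along, a rough sketch of what such a rerun might look like (wrapped in Python; the paths and participant label are placeholders, and the exact flags can differ between fmriprep versions, so check `fmriprep --help` for yours):

```python
# Sketch of re-running fmriprep with fieldmap-less (SyN-based) susceptibility
# distortion correction enabled. All paths and labels are placeholders.
import subprocess

subprocess.run([
    "fmriprep", "/data/ds001", "/data/ds001/derivatives", "participant",
    "--participant-label", "01",
    "--use-syn-sdc",  # the SyN SDC option discussed above
], check=True)
```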

The cause of the wider fMRI normalization in volume 4 is worth some investigation. My concern, however, is not with this particular data set; I processed it as a public example to see the results of fmriprep. I am trying to understand what is considered acceptable so one may proceed with group analyses. The publishing of fMRI data is a great help for testing purposes, and for those of us who are learning, it is extremely helpful to compare our results to those achieved by experts. Are there any data sources processed with fmriprep that include the results?

Thanks again for your help.

@chrisgorgo
Contributor

Thanks for sharing this - it does indeed look good. It would be interesting to see results of the SyN SDC experiment (however it's also worthwhile checking the HTML reports for the quality of T1w-BOLD coregistration).

For reference, you might want to check out our preprocessed CNP data paper: https://f1000research.com/articles/6-1262/v2 as well as the many OpenNeuro datasets that have been analyzed with FMRIPREP: https://openneuro.org/public/jobs

@oesteban
Member

A possibility that would explain the poor spatial normalization of some BOLD runs is a faulty EPI-to-T1 co-registration.

Please let us know whether you used the FreeSurfer workflow (i.e. you did not use --no-freesurfer or --no-fs-reconall), and check the co-registration plots in the reports.

However, I'd be surprised, since bbregister works remarkably well in our experience. Which subject is the one that looks obviously wrong in your first illustration?

@bkraft4257
Author

I am in the process of running the SyN-SDC experiment. I will share those results when the processing is done and I can assemble a coherent report.

Chris - I have looked at the paper, and Figure 2 is very helpful for my understanding of acceptable normalization of the EPI data. I have also looked at the OpenNeuro data sets, but I haven't been able to find one that includes the output of fmriprep. One could assume that, because fmriprep is robust with minimal flags, all I have to do is run fmriprep on the posted data sets and I will get similar results. Unfortunately, when the results don't match my expectations, I assume that I have done something wrong. For example, I thought that the EPI normalization would be more stable as I played the video across subjects. I suspect that my expectations are simply wrong, but I can't verify this until I can compare my output of fmriprep (run as a novice) to that of someone who has run it with more experience and is happy with the results. This is why I am looking for a data set with the output of fmriprep included. Even just one or two subjects would be very helpful.

I used the FreeSurfer workflow. Subject 04 is the subject with the wide brain. Here is the report; I posted the HTML file as a zip so it could be uploaded to GitHub. Just so you know (and you probably already do), the HTML doesn't display correctly in Firefox (even the newest version) or Safari. I have only had success getting the overlays to display correctly in Chrome. I will submit another issue so this can be tracked separately.

sub-04.html.zip

Thanks again for your help and how responsive you both have been.

@oesteban
Member

Ok, I think we can blame it all on the skull-stripping for this case:

[screenshot: brain mask from the sub-04 report showing the failed skull-stripping]

It is worth looking deeper into the quality issues of that subject's T1 that led to such a bad mask. From there on, all processing that relies on the anatomical information is mostly wrong :(

Thanks for reporting this, I have opened #946 to address this issue.

I'm also seeing that you used FMRIPREP 1.0.0-rc3. Could you try the latest version (1.0.5), which includes some improvements to the brain mask and modifications to the anatomical workflow?

@oesteban
Member

Ok scratch that: this subject is already skull-stripped!

That is why it didn't work. This subject is probably worth excluding, since the image has already gone through some preprocessing (and the original brain mask is actually not very good).
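(A quick, rough way to catch this up front is to check how much of the T1w image background is exactly zero: a skull-stripped image typically has a large zeroed background, while a defaced-but-unstripped one keeps plenty of non-zero scalp and skull voxels. A minimal sketch, with a placeholder path and an arbitrary illustrative threshold:)

```python
# Rough heuristic for spotting an already skull-stripped T1w: measure the
# fraction of exactly-zero voxels. The 0.5 cutoff is purely illustrative.
import nibabel as nib
import numpy as np

img = nib.load("sub-04/anat/sub-04_T1w.nii.gz")  # placeholder path
zero_fraction = float(np.mean(img.get_fdata() == 0))
print(f"fraction of exactly-zero voxels: {zero_fraction:.2f}")
if zero_fraction > 0.5:
    print("Large zeroed background: this T1w is likely already skull-stripped.")
```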

@bkraft4257
Author

Well, it's a little embarrassing that I didn't see that sooner. The data were downloaded directly from ds001 of the BIDS example data sets, without modification. It never occurred to me that the original data would be the source of the problem.

@effigies
Member

I believe the consensus last time was that receiving skull-stripped images implies an unknown level of preprocessing (e.g. was it already bias-field corrected?), which makes it difficult to decide on best practices for further processing. Hence, supporting such images was considered very low priority.

So for OpenFMRI we've been excluding these subjects, and for user-supplied data we would recommend reverting to the original, defaced T1w images to ensure more uniform preprocessing.

Please correct me if I'm misremembering, @oesteban. This might also be worth starting an FAQ for, since we've now run across it a couple of times.

@oesteban
Member

@bkraft4257 not embarrassing at all: even I, having looked at hundreds of reports, did not notice in your report that the data were skull-stripped. I only saw it after posting. If you don't mind, I'll close this issue.

@effigies I've opened #947; this will be one of the first FAQs, and I'll copy this explanation (since it looks great to me).

@chrisgorgo
Contributor

For future reference, here's a quick walkthrough of how to find existing FMRIPREP results on public datasets on OpenNeuro:
[animated walkthrough: finding FMRIPREP results on OpenNeuro]
