
Adaptive mask question #600

Closed
cjl2007 opened this issue Sep 8, 2020 · 4 comments
Labels: question, user-feedback

Comments


cjl2007 commented Sep 8, 2020

Hi - I have a few questions re: the adaptive mask routine (in part related to issue #543). I'll start with a few observations.

In my data, a large number of voxels are often flagged as "bad" (less than 3 "good" echoes in the adaptive mask) and excluded entirely from the optimally-combined / denoised time-series. See pictures below for an example adaptive mask and OC time-series where this is the case. It may be hard to appreciate without knowing what the original data looked like, but you'll have to take my word that a large number of OFC/temporal voxels have been removed.

[Image: adaptive mask]

[Image: optimally combined (OC) time series]

This isn't desirable behavior from my perspective, so I have been calculating a T2* map outside of tedana and replacing the adaptive mask with a mask based on the R-squared values associated with those fits (as suggested in #543). See the example T2* map and R-squared map below for the same scan as above. You can see that "good" (R2 > ~0.8) fits are obtained in many voxels flagged as "bad" in the adaptive mask shown earlier.

[Image: T2* map]

[Image: R-squared map]
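For concreteness, here is a minimal sketch of the kind of log-linear fit and R-squared mask I mean, assuming the echo-wise mean signal is in an (n_voxels, n_echoes) array (the names and the 0.8 cutoff are illustrative, not tedana code):

```python
import numpy as np

def fit_t2star_r2(mean_data, tes):
    """Voxel-wise log-linear fit of log(S) = log(S0) - TE / T2*.

    mean_data : (n_voxels, n_echoes) mean signal at each echo time
    tes       : (n_echoes,) echo times in ms
    Returns T2* (ms) and the R-squared of the fit in log space.
    """
    eps = np.finfo(float).eps
    log_data = np.log(np.maximum(mean_data, eps))
    X = np.column_stack([np.ones_like(tes), -tes])      # [intercept, -TE]
    betas, *_ = np.linalg.lstsq(X, log_data.T, rcond=None)
    t2star = 1.0 / np.maximum(betas[1], eps)            # slope on -TE is 1/T2*

    pred = (X @ betas).T
    ss_res = np.sum((log_data - pred) ** 2, axis=1)
    ss_tot = np.sum((log_data - log_data.mean(axis=1, keepdims=True)) ** 2, axis=1)
    r2 = 1.0 - ss_res / np.maximum(ss_tot, eps)
    return t2star, r2

# Hypothetical usage with the rough threshold mentioned above:
# tes = np.array([14.0, 38.0, 62.0])
# t2s, r2 = fit_t2star_r2(mean_data, tes)
# mask = r2 > 0.8
```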

So, my specific questions are...

  1. Have others noticed this behavior (many voxels excluded by the adaptive mask) in their data? Do we think that this behavior is "okay"? To be clear, I understand why we want to avoid calculating fits / metrics with noisy voxels, but I am not convinced that removing the data entirely is better.

  2. More generally, is it worth considering alternatives to the adaptive mask, such as R-squared, as suggested in #543 (Supplement or replace adaptive mask with R-squared)? In my hands anyway, this seems to be a less aggressive way of limiting the contribution of noisy voxels to the fits and metrics.

  3. If not, it might be worth allowing users to specify their mask manually. I know that there is an option to provide an explicit mask, but ultimately this mask will be adjusted by tedana on the basis of the same adaptive mask that would have been used had the user not specified an explicit mask, right?

Thanks and looking forward to hearing your thoughts on this topic,

tsalo (Member) commented Sep 8, 2020

  1. Have others noticed this behavior (many voxels excluded by the adaptive mask) in their data? Do we think that this behavior is "okay"? To be clear, I understand why we want to avoid calculating fits / metrics with noisy voxels, but I am not convinced that removing the data entirely is better.

In my own data, I've noticed the opposite issue, namely that the adaptive mask does not identify many voxels as having bad signal in any echoes. Regardless of the specific pattern, I think the step is necessary, but the actual implementation involves a lot of arbitrary choices. I can't say for sure whether voxels should be completely removed when their signal has bottomed out at later echoes, but it does make sense to me to do so.
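To make the arbitrariness concrete, the general shape of the procedure is something like this toy sketch (a deliberate simplification, not tedana's actual code; the percentile and fraction are illustrative):

```python
import numpy as np

def toy_adaptive_mask(data, frac=0.25):
    """Count "good" echoes per voxel (simplified illustration).

    data : (n_voxels, n_echoes, n_trs) multi-echo time series
    frac : fraction of each echo's robust signal ceiling used as the
           cutoff -- one of the arbitrary choices mentioned above
    """
    mean_data = data.mean(axis=-1)                        # (n_voxels, n_echoes)
    thresholds = frac * np.percentile(mean_data, 98, axis=0)
    return (mean_data > thresholds).sum(axis=1)           # good-echo counts

# Voxels whose count falls below 3 are then dropped from the combined
# data entirely, which is the behavior at issue in this thread.
```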

  2. More generally, is it worth considering alternatives to the adaptive mask, such as R-squared, as suggested in #543 (Supplement or replace adaptive mask with R-squared)? In my hands anyway, this seems to be a less aggressive way of limiting the contribution of noisy voxels to the fits and metrics.

I do want to use R2, but I hit a wall in actually implementing it (see #548). That said, I think we will want to supplement the adaptive masking procedure with R2, rather than replace it. We still need a solid way to identify later echoes without signal for voxels with very short T2*, and I'm not sure that R2 gets us that. Someone else can correct me on this though.

  3. If not, it might be worth allowing users to specify their mask manually. I know that there is an option to provide an explicit mask, but ultimately this mask will be adjusted by tedana on the basis of the same adaptive mask that would have been used had the user not specified an explicit mask, right?

That seems like a good idea. The way the adaptive mask interacts with explicit masks has been a source of confusion in the past. You are correct that the adaptive masking procedure is used in the same manner with an explicit mask as with no provided mask, although I think the thresholds would change based on the available data. I'd have to look through the code again to make sure that the adaptive masking isn't done on the unmasked data and then just intersected with the mask.
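Abstractly, the two orderings in question look like this (toy code reusing a simplified good-echo count; everything here is illustrative, not tedana's API):

```python
import numpy as np

def good_echo_count(mean_data, frac=0.25):
    # Toy rule: count echoes above an echo-specific percentile threshold
    return (mean_data > frac * np.percentile(mean_data, 98, axis=0)).sum(axis=1)

rng = np.random.default_rng(0)
mean_data = rng.uniform(0, 100, size=(1000, 4))   # (n_voxels, n_echoes)
explicit = rng.uniform(size=1000) > 0.2           # user-supplied mask

# Order A: restrict to the explicit mask first, then compute counts;
# the thresholds now depend only on the voxels inside the mask.
counts_a = np.zeros(1000, dtype=int)
counts_a[explicit] = good_echo_count(mean_data[explicit])

# Order B: compute counts on all data, then intersect with the mask.
counts_b = good_echo_count(mean_data) * explicit

print((counts_a != counts_b).sum())  # the two orders can disagree
```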

tsalo added the question label on Sep 8, 2020
cjl2007 (Author) commented Sep 8, 2020

Thanks for the thoughtful response, Taylor.

It is interesting that, in your hands, the adaptive mask does not flag as many voxels as bad. I am trying to think about why that would be. One reason could be the characteristics of our datasets. For example, maybe the individual echoes in my scans are noisier in general due to my use of multi-band / in-plane acceleration? I don't fully understand how voxels are being binned as good/bad across echoes, so it's hard for me to think of what might be going on here.

Anyway, for what it's worth, I wanted to share my experience running tedana with and without the adaptive mask routine. In my hands, regardless of whether I run tedana normally or without the adaptive mask (i.e., labeling all voxels in my explicit mask as "good"), I get nearly identical results ... the biggest difference is whether huge chunks of OFC / inferior temporal cortex are missing in the various output files.

After all, one of the really attractive aspects of multi-echo fMRI is recovering signals from these kinds of short T2* brain regions, so masking out data from these regions and then artificially creating signals via some kind of spatial smoothing or dilation later on is not ideal!

Thanks again
Chuck

mvdoc (Contributor) commented Oct 19, 2020

I think I ran into the same issue with the adaptive mask while testing out sequences with ipat/mb and 3 echoes, using fMRIPrep. The mask is too aggressive and cuts out huge portions of ATL and OFC. Curiously, when I used t2smap 0.8.0 (from pip), the mask was not too aggressive, but I haven't checked what changed between 0.8.0 and the latest release.

As a comparison, this is a slice after running t2smap from tedana 0.8.0

[Image: slice after running t2smap, tedana 0.8.0]

and this is with the latest tedana

[Image: slice with the latest tedana]

tsalo (Member) commented Oct 20, 2020

@mvdoc, in 0.0.8 (I believe), we started using our adaptive mask (a mask in which each voxel's value reflects the number of echoes with "good" signal) across the full workflow (please see #358). As a result, the optimal combination procedure now only uses "good" echoes for each voxel, and any voxels without good signal in >=3 echoes are masked out.
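For reference, optimal combination weights each echo by roughly w_i = TE_i * exp(-TE_i / T2*) (the Posse et al. weighting), and with the adaptive mask the sum runs over only that voxel's good echoes. A simplified single-voxel sketch, not tedana's actual code:

```python
import numpy as np

def combine_voxel(ts, tes, t2star, n_good):
    """T2*-weighted combination using only the first n_good echoes.

    ts     : (n_echoes, n_trs) time series for one voxel
    tes    : (n_echoes,) echo times in ms
    t2star : voxel-wise T2* estimate in ms
    n_good : good-echo count from the adaptive mask
    """
    if n_good < 3:
        return None                      # voxel is masked out entirely
    te, x = tes[:n_good], ts[:n_good]
    w = te * np.exp(-te / t2star)        # echo weights
    return (w / w.sum()) @ x             # (n_trs,) combined time series
```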

That's how the masking of the combined data has changed. Since both you and @cjl2007 have expressed concerns about this new approach, it's worth explaining why we chose to do it.

The goal of the adaptive masking procedure is to identify when later echoes in voxels with very short T2* do not have good signal. In the figure below, I have simulated some data with an S0 of 10000 and a T2* of 5. In this case, there are five echoes. The first one or two echoes have values above zero, and, in this simulated example, actually contain signal that can be used for optimal combination and multi-echo denoising. The last three echoes have "bottomed out". Since tedana requires three echoes to do its work, there are just not enough echoes at short enough echo times to use this voxel.

[Image: simulated five-echo signal decay (S0 = 10000, T2* = 5)]
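The simulation is just the monoexponential decay model; reproducing the gist takes a few lines (the echo times here are assumed, since I didn't list them):

```python
import numpy as np

s0, t2star = 10000.0, 5.0                          # values used above
tes = np.array([12.0, 28.0, 44.0, 60.0, 76.0])     # assumed echo times (ms)

signal = s0 * np.exp(-tes / t2star)                # S(TE) = S0 * exp(-TE / T2*)
print(np.round(signal, 1))
# Roughly [907.2, 37.0, 1.5, 0.1, 0.0]: only the first echo or two carries
# usable signal; the later echoes have bottomed out.
```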

That said, our biggest caveat is that the adaptive masking procedure we use to determine which echoes are "good" for each voxel is definitely far from perfect. We have plans to use other methods (e.g., R^2), but at the moment this is what we have working.

When I first started contributing to tedana, @rmarkello walked me through this in #102. At the time, tedana leaned heavily toward using this noisy data in some places in the workflow, but not others. Using signal quality information in some parts of the pipeline while ignoring it in others seemed inconsistent to me, so we ultimately chose to apply the adaptive mask throughout the whole pipeline. The unfortunate side effect is reduced coverage in the optimally combined data.

I hope that helps.
