
Define a minimum number of BOLD components #666

Open
tsalo opened this issue Jan 28, 2021 · 6 comments
Labels: discussion (issues that still need to be discussed), TE-dependence (issues related to TE dependence metrics and component selection)

Comments

tsalo (Member) commented Jan 28, 2021

Summary

In our current code (and as reinforced in #663), only one BOLD component is needed to produce valid output. Do we want to increase this minimum at all?

Next Steps

  1. Discuss
tsalo added the discussion and TE-dependence labels on Jan 28, 2021
notZaki (Contributor) commented Jan 29, 2021

If this gets implemented, then we should keep track of the classifications between iterations. That way, if the desired number of BOLD components is not found within the iteration limit, the iteration with the most BOLD components could be returned.
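A rough sketch of that bookkeeping, assuming the decomposition can be rerun with different seeds and returns a component table with a "classification" column (hypothetical function and argument names, not tedana's actual API):

```python
# Minimal sketch: rerun the decomposition up to max_restarts times and keep
# whichever run classified the most components as accepted (BOLD).
def select_best_run(run_decomposition, n_bold_min=1, max_restarts=10):
    best_comptable, best_n_bold = None, -1
    for seed in range(max_restarts):
        comptable = run_decomposition(seed=seed)  # assumed to return a pandas DataFrame
        n_bold = (comptable["classification"] == "accepted").sum()
        if n_bold >= n_bold_min:
            return comptable  # enough BOLD components; stop early
        if n_bold > best_n_bold:
            best_comptable, best_n_bold = comptable, n_bold
    # Iteration limit reached: fall back to the run with the most BOLD components.
    return best_comptable
```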

Out of curiosity, how important is it that some BOLD components are found? My impression was that it doesn't really matter if the end goal is to denoise; it's more problematic to have no rejected components in that case.

jbteves (Collaborator) commented Jan 29, 2021

It's kind of a philosophical issue in my opinion. Presumably if you have a living person in the scanner, they will have some BOLD signal, even if it's not of neuronal origin. If you have no BOLD components, it's an indication that something is wrong.

tsalo (Member, Author) commented Jan 29, 2021

> If this gets implemented, then we should keep track of the classifications between iterations. That way, if the desired number of BOLD components is not found within the iteration limit, the iteration with the most BOLD components could be returned.

Good point. 👍 to that feature, if the number we come up with is > 1.

> Out of curiosity, how important is it that some BOLD components are found? My impression was that it doesn't really matter if the end goal is to denoise; it's more problematic to have no rejected components in that case.

I agree with Josh, but also: if the denoised data only include (generally low-variance) ignored components and unmodeled variance, then we're talking about a very small fraction of the overall variance, which is probably just noise.
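To make that concrete, here is a toy calculation with a made-up component table (the column names mirror tedana's outputs, but the numbers are invented); with zero accepted components, only the ignored components' variance survives denoising:

```python
import pandas as pd

# Invented example: no accepted components, one low-variance ignored component.
comptable = pd.DataFrame({
    "classification": ["rejected", "rejected", "rejected", "ignored"],
    "variance explained": [50.0, 30.0, 15.0, 3.0],
})

# Components that are retained in the denoised data (accepted + ignored).
retained = comptable["classification"].isin(["accepted", "ignored"])
retained_variance = comptable.loc[retained, "variance explained"].sum()
total_variance = comptable["variance explained"].sum()

# Note: unmodeled variance is not in the table, so this only covers the
# ICA-modeled portion of the signal.
print(f"Retained {retained_variance / total_variance:.1%} of the modeled variance")
# -> Retained 3.1% of the modeled variance
```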

dowdlelt (Collaborator) commented Feb 8, 2021

Good points are made here, but I think to some extent tedana should be comfortable with failing to find any BOLD components! I struggle to imagine exactly how this could occur, but as SNR decreases (perhaps with higher resolution, etc.), it seems possible that the current classification methods would have trouble and we would end up with frustrated users. To @notZaki's point, the purpose is denoising, right? I want tedana to tell me what is bad in the data, even if it cannot tell me what is good.

I recall Kundu giving a talk showing the high-kappa time series, which of course requires identifying BOLD components, but I think analyses of those data have fallen out of favor.

jbteves (Collaborator) commented Feb 8, 2021

I'm confused: would higher resolution change whether or not we'd expect BOLD-like components? I can see why finer resolution might affect the decomposition overall (e.g., perhaps things become more spatially discontinuous?), but I'm not sure that we've seen anything to indicate that this is happening yet.

dowdlelt (Collaborator) commented Feb 8, 2021

It was meant to be more of a "maybe this could happen" example rather than a prognostication; I've updated it for clarity. More to the point, it is difficult to predict what users will do or how methods will develop, and I think preventing them from denoising their data, even though plenty of noise was found, could be frustrating. That said, a warning, perhaps even a loud one, is important. Finding no BOLD components is a concern, but I feel it shouldn't be a showstopper.
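One way to reconcile these positions, sketched under the assumption of a hypothetical `min_bold_components` option (not something tedana currently exposes), would be to warn loudly but keep going:

```python
import logging

LGR = logging.getLogger(__name__)


def check_bold_components(comptable, min_bold_components=1, fail_hard=False):
    """Warn (or optionally fail) when too few components are classified as BOLD.

    Hypothetical sketch, not tedana's actual behavior.
    """
    n_bold = (comptable["classification"] == "accepted").sum()
    if n_bold < min_bold_components:
        msg = (
            f"Only {n_bold} BOLD component(s) identified "
            f"(minimum requested: {min_bold_components}). The data can still be "
            "denoised, but the results should be inspected carefully."
        )
        if fail_hard:
            raise ValueError(msg)
        LGR.warning(msg)
    return n_bold
```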
