
[DOC] Add documentation page on denoising approaches #823

Merged 42 commits on Nov 28, 2022
Commits
e68775f
Add page on denoising strategies.
tsalo Oct 22, 2021
bb1eed7
Add missing figure.
tsalo Oct 22, 2021
7caf71c
Add new extension to docs reqs.
tsalo Oct 22, 2021
b4103a4
Remove FSL and use actual tedana outputs.
tsalo Nov 15, 2021
482c0cb
Add in constants.
tsalo Nov 15, 2021
98eef39
Add a note.
tsalo Nov 15, 2021
e0e9a81
Draft some more stuff.
tsalo Nov 18, 2021
573d954
Use nilearn maskers when possible.
tsalo Nov 18, 2021
9fe8be9
Maybe?
tsalo Nov 18, 2021
f38c957
Update denoising.rst
tsalo Nov 18, 2021
fac34c7
Fix long lines.
tsalo Nov 19, 2021
8c1ffc7
Add notes to examples.
tsalo Nov 19, 2021
574b033
Add topic.
tsalo Nov 19, 2021
20ddc72
Fix topic?
tsalo Nov 19, 2021
d92003a
It needs that blank line.
tsalo Nov 19, 2021
144f7c8
Use admonition instead of topic.
tsalo Nov 19, 2021
bfc4a97
Use the appropriate output filenames.
tsalo Nov 19, 2021
bd70538
Update theme_overrides.css
tsalo Nov 19, 2021
de1d624
Update index.rst
tsalo Nov 19, 2021
528d1a2
Update index.rst
tsalo Nov 19, 2021
a9737d4
Update theme_overrides.css
tsalo Nov 19, 2021
c08391a
Draft new admonition.
tsalo Nov 20, 2021
0803eaf
Update.
tsalo Nov 21, 2021
478f582
Update theme_overrides.css
tsalo Nov 21, 2021
ca99041
Update denoising.rst
tsalo Nov 21, 2021
3160293
Merge branch 'main' into document-denoising
tsalo Jan 28, 2022
670ad54
Update docs/denoising.rst
tsalo Apr 28, 2022
c54f449
Drop different versions of denoising code.
tsalo Nov 19, 2022
84ac945
Merge remote-tracking branch 'upstream/main' into document-denoising
tsalo Nov 19, 2022
9326327
Update setup.cfg
tsalo Nov 19, 2022
6b3cb00
Add @handwerkerd's recommended change.
tsalo Nov 19, 2022
30f0a99
Update denoising.rst
tsalo Nov 19, 2022
ea05a7c
Some final improvements before review.
tsalo Nov 19, 2022
59bb4e9
Update denoising.rst
tsalo Nov 22, 2022
8764d82
Fix.
tsalo Nov 22, 2022
af75644
Fix.
tsalo Nov 22, 2022
84eb6c6
Apply suggestions from code review
tsalo Nov 22, 2022
8d99d34
Remove unnecessary topic. Thx @handwerkerd!
tsalo Nov 22, 2022
6dba5c9
Merge remote-tracking branch 'upstream/main' into document-denoising
tsalo Nov 23, 2022
5b228c9
Remove duplicate carpet figure.
tsalo Nov 23, 2022
2e377a5
Add note on classification.
tsalo Nov 28, 2022
33ff086
Fix link.
tsalo Nov 28, 2022
2 changes: 1 addition & 1 deletion README.md
@@ -54,7 +54,7 @@ To set up a local environment, you will need Python >=3.6 and the following pack
* [numpy](http://www.numpy.org/)
* [scipy](https://www.scipy.org/)
* [scikit-learn](http://scikit-learn.org/stable/)
* [nilearn](https://nilearn.github.io/)
* [nilearn](https://nilearn.github.io/stable/)
* [nibabel](http://nipy.org/nibabel/)
* [mapca](https://github.com/ME-ICA/mapca)

42 changes: 42 additions & 0 deletions docs/_static/theme_overrides.css
@@ -18,3 +18,45 @@ div.admonition.caution {
.rst-content .caution .admonition-title {
background: #FF6F6F
}

/* Formatting rules specifically for admonition in denoising.rst */
div.admonition-independence-vs-orthogonality {
border-radius: 25px;
padding: 20px;
border-style: solid;
background: #dac9ff !important;
}

div.admonition-independence-vs-orthogonality .admonition-title {
border-radius: 25px;
padding: 10px;
border-style: solid;
font-size: larger;
text-align: center;
background: #a280f1;
}

div.admonition-independence-vs-orthogonality .admonition-title:before {
content: none;
}

/* Formatting rules specifically for admonition in denoising.rst */
div.admonition-which-should-you-use {
border-radius: 25px;
padding: 20px;
border-style: solid;
background: #c8e2c5 !important;
}

div.admonition-which-should-you-use .admonition-title {
border-radius: 25px;
padding: 10px;
border-style: solid;
font-size: larger;
text-align: center;
background: #78c46f;
}

div.admonition-which-should-you-use .admonition-title:before {
content: none;
}
5 changes: 3 additions & 2 deletions docs/conf.py
@@ -51,6 +51,7 @@
"sphinx.ext.linkcode",
"sphinx.ext.napoleon",
"sphinx.ext.todo",
"sphinx_copybutton",
"sphinxarg.ext",
"sphinxcontrib.bibtex", # for foot-citations
]
@@ -103,7 +104,7 @@
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
pygments_style = "tango"

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
@@ -180,5 +181,5 @@ def setup(app):
"matplotlib": ("https://matplotlib.org/", None),
"nibabel": ("https://nipy.org/nibabel/", None),
"pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
"nilearn": ("http://nilearn.github.io/", None),
"nilearn": ("https://nilearn.github.io/stable/", None),
}
246 changes: 246 additions & 0 deletions docs/denoising.rst
@@ -0,0 +1,246 @@
##############################
Denoising Data with Components
##############################

Decomposition-based denoising methods like ``tedana`` will produce two important outputs:
component time series and component classifications.
The component classifications will indicate whether each component is "good" (accepted) or "bad" (rejected).
To remove noise from your data, you can regress the "bad" components out of it,
though there are multiple ways to accomplish this.

.. important::

This page assumes that you have reviewed the component classifications produced by ``tedana`` already,
and that you agree with those classifications.
If you think some components were misclassified,
you can find instructions for reviewing and changing classifications in
`our FAQ <https://tedana.readthedocs.io/en/stable/faq.html#tedana-i-think-that-some-bold-ica-components-have-been-misclassified-as-noise>`_.
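
Purely as an illustrative sketch of what a manual override might look like
(the FAQ above describes the recommended process;
the file name and the ``ICA_05`` label below are hypothetical placeholders that follow the naming used later on this page),
you could edit the component table before running the denoising steps:

.. code-block:: python

    import pandas as pd

    # Hypothetical example: after reviewing the tedana report, you decide that
    # component "ICA_05" is BOLD-like and should be treated as accepted downstream
    metrics_df = pd.read_table("sub-01_task-rest_desc-tedana_metrics.tsv")
    metrics_df.loc[metrics_df["Component"] == "ICA_05", "classification"] = "accepted"

    # Save the edited table and use it in place of the original metrics file below
    metrics_df.to_csv(
        "sub-01_task-rest_desc-tedanaEdited_metrics.tsv", sep="\t", index=False
    )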

By default, ``tedana`` will perform a regression including both "good" and "bad" components,
and then will selectively remove the "bad" components from the data.
This is colloquially known as "non-aggressive denoising".

However, users may wish to apply a different type of denoising,
or to incorporate other regressors into their denoising step,
and we will discuss these alternatives here.

``tedana`` uses independent component analysis (ICA) to decompose the data into components,
each of which has a spatial map of weights and a time series that are assumed to reflect meaningful underlying signals.
It then classifies each component as "accepted" (BOLD-like) or "rejected" (non-BOLD-like).
The data are denoised by taking the time series of the rejected components and regressing them from the data.

``tedana`` uses a spatial ICA, which means the components are *statistically independent* across space, not time.
That means the time series from accepted and rejected components can share variance.
If a rejected component shares meaningful variance with an accepted component,
then regressing the rejected component's time series from your data may also remove meaningful signal
associated with the accepted component.
Depending on the application,
people may take different approaches to handling variance that is shared between accepted and rejected components.
These approaches to decomposition-based denoising are described below.

This page has three purposes:

1. Describe different approaches to denoising using ICA components.
2. Provide sample code that performs each type of denoising.
3. Describe how to incorporate external regressors (e.g., motion parameters) into the denoising step.

.. admonition:: Which should you use?

The decision about which denoising approach you should use depends on a number of factors.

The first thing to know is that aggressive denoising, while appropriate for orthogonal time series,
may remove signal associated with "good" time series that are independent of, but not orthogonal to,
the "bad" time series being regressed out.

As an alternative, users may want to try non-aggressive denoising or aggressive denoising combined with component orthogonalization.

The main difference between the orthogonalization+aggressive and non-aggressive approaches is that,
with orthogonalization,
all of the variance shared between the accepted and rejected components is assigned to the accepted components,
and therefore none of it is removed in the nuisance regression.
In contrast, with non-aggressive denoising,
the shared variance is partitioned between the accepted and rejected regressors during the joint fit,
so some of it is removed (see the sketch below).
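
To make that contrast concrete, here is a minimal sketch in our own notation
(not a ``tedana`` output; confound regressors are omitted for brevity).
Let :math:`Y` be the masked time-by-voxel data,
and let :math:`X_{acc}` and :math:`X_{rej}` hold the accepted and rejected component time series as columns.

.. math::

   \text{Aggressive:} \quad Y = X_{rej} B_{rej} + E, \qquad Y_{\text{denoised}} = Y - X_{rej} \hat{B}_{rej}

.. math::

   \text{Non-aggressive:} \quad Y = X_{acc} B_{acc} + X_{rej} B_{rej} + E, \qquad Y_{\text{denoised}} = Y - X_{rej} \hat{B}_{rej}

In the non-aggressive model, :math:`X_{acc}` absorbs its share of any variance it has in common with :math:`X_{rej}` during the joint fit,
so the estimated :math:`\hat{B}_{rej}`, and hence the signal removed, is generally smaller than in the aggressive fit.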

Now we can get started!
This walkthrough will show you how to perform different kinds of denoising on example data.

For this walkthrough, we will use Python and the Nilearn package to denoise data that were preprocessed in fMRIPrep.
However, you can definitely accomplish all of these steps in other languages, like MATLAB,
or using other neuroimaging toolboxes, like AFNI.

Let's start by loading the necessary data.
No matter which type of denoising you want to use, you will need to include this step.

For this, you need the mixing matrix, the data you're denoising, a brain mask,
and an index of "bad" components.

.. code-block:: python

import pandas as pd # A library for working with tabular data

# Files from fMRIPrep
data_file = "sub-01_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"
mask_file = "sub-01_task-rest_desc-brain_mask.nii.gz"
confounds_file = "sub-01_task-rest_desc-confounds_timeseries.tsv"

# Files from tedana (after running on fMRIPrepped data)
mixing_file = "sub-01_task-rest_desc-ICA_mixing.tsv"
metrics_file = "sub-01_task-rest_desc-tedana_metrics.tsv"

# Load the mixing matrix
mixing_df = pd.read_table(mixing_file) # Shape is time-by-components

# Load the component table
metrics_df = pd.read_table(metrics_file)
rejected_columns = metrics_df.loc[metrics_df["classification"] == "rejected", "Component"]
accepted_columns = metrics_df.loc[metrics_df["classification"] == "accepted", "Component"]

# Load the fMRIPrep confounds file
confounds_df = pd.read_table(confounds_file)

# Select external nuisance regressors we want to use for denoising
confounds = confounds_df[
[
"trans_x",
"trans_y",
"trans_z",
"rot_x",
"rot_y",
"rot_z",
"csf",
"white_matter",
]
].to_numpy()

# Select "bad" components from the mixing matrix
rejected_components = mixing_df[rejected_columns].to_numpy()
accepted_components = mixing_df[accepted_columns].to_numpy()
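
Before moving on, a quick optional sanity check (not part of the original workflow, and assuming ``nibabel`` is installed)
can confirm that the regressors and the data describe the same number of volumes:

.. code-block:: python

    import nibabel as nib  # A library for reading neuroimaging data files

    # The mixing matrix and confounds should have one row per volume in the BOLD data
    n_volumes = nib.load(data_file).shape[3]
    assert mixing_df.shape[0] == n_volumes
    assert confounds.shape[0] == n_volumes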


*****************************************************************
Remove all noise-correlated fluctuations ("aggressive" denoising)
*****************************************************************

If you regress only the nuisance regressors (here, the rejected components and the fMRIPrep confounds) out of your data,
and then retain the residuals for further analysis, you are performing "aggressive" denoising.

.. code-block:: python

import numpy as np # A library for working with numerical data
from nilearn.maskers import NiftiMasker # A class for masking and denoising fMRI data

# Combine the rejected components and the fMRIPrep confounds into a single array
regressors = np.hstack((rejected_components, confounds))

masker = NiftiMasker(
mask_img=mask_file,
standardize_confounds=True,
standardize=False,
smoothing_fwhm=None,
detrend=False,
low_pass=False,
high_pass=False,
t_r=None, # This shouldn't be necessary since we aren't bandpass filtering
reports=False,
)

# Denoise the data by fitting and transforming the data file using the masker.
# Note: fit_transform returns a 2D (time-by-voxels) array,
# so we convert it back to a 4D image with inverse_transform before saving.
denoised_data = masker.fit_transform(data_file, confounds=regressors)
denoised_img = masker.inverse_transform(denoised_data)

# Save to file
denoised_img.to_filename(
"sub-01_task-rest_space-MNI152NLin2009cAsym_desc-aggrDenoised_bold.nii.gz"
)


*********************************************************************************************************************************
Remove noise-correlated fluctuations that aren't correlated with fluctuations in accepted components ("non-aggressive" denoising)
*********************************************************************************************************************************

If you include both the nuisance regressors and the regressors of interest in your regression,
but only remove the variance associated with the nuisance regressors, you are performing "non-aggressive" denoising.

Unfortunately, non-aggressive denoising is difficult to do with :mod:`nilearn`'s Masker
objects, so we will end up using :mod:`numpy` directly for this approach.

.. code-block:: python

import numpy as np # A library for working with numerical data
from nilearn.masking import apply_mask, unmask # Functions for (un)masking fMRI data

# Apply the mask to the data image to get a 2d array
data = apply_mask(data_file, mask_file)  # Shape is time-by-voxels

# Fit GLM to accepted components, rejected components and nuisance regressors
# (after adding a constant term)
regressors = np.hstack(
(
confounds,
rejected_components,
accepted_components,
np.ones((mixing_df.shape[0], 1)),
),
)
betas = np.linalg.lstsq(regressors, data, rcond=None)[0][:-1]

# Denoise the data using the betas from just the bad components
confounds_idx = np.arange(confounds.shape[1] + rejected_components.shape[1])
pred_data = np.dot(np.hstack((confounds, rejected_components)), betas[confounds_idx, :])
data_denoised = data - pred_data

# Save to file
denoised_img = unmask(data_denoised, mask_file)
denoised_img.to_filename(
"sub-01_task-rest_space-MNI152NLin2009cAsym_desc-nonaggrDenoised_bold.nii.gz"
)


************************************************************************************
Orthogonalize the noise components w.r.t. the accepted components prior to denoising
************************************************************************************

If you want to ensure that variance shared between the accepted and rejected components is not removed from the denoised data,
you may wish to orthogonalize the rejected components with respect to the accepted components.
This way, you regress out only the variance that is unique to the rejected components, in the form of what we call "pure evil" components.
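
In matrix terms, this orthogonalization is just a least-squares projection
(a sketch in our notation, where :math:`X_{bad}` stacks the rejected components and confounds,
and :math:`X_{acc}^{+}` is the Moore-Penrose pseudoinverse of the accepted component time series):

.. math::

   X_{\text{orth}} = X_{bad} - X_{acc} X_{acc}^{+} X_{bad}

which is what the ``numpy.linalg.lstsq`` step in the code below computes.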

.. note::

The ``tedana`` workflow's ``--tedort`` option performs this orthogonalization automatically and
writes out a separate mixing matrix file.
However, this orthogonalization only takes the components into account,
so you will need to separately perform the orthogonalization yourself if you have other regressors you want to account for.

.. code-block:: python

import numpy as np # A library for working with numerical data
from nilearn.maskers import NiftiMasker # A class for masking and denoising fMRI data

# Combine the confounds and rejected components in a single array
bad_timeseries = np.hstack((rejected_components, confounds))

# Regress the good components out of the bad time series to get "pure evil" regressors
betas = np.linalg.lstsq(accepted_components, bad_timeseries, rcond=None)[0]
pred_bad_timeseries = np.dot(accepted_components, betas)
orth_bad_timeseries = bad_timeseries - pred_bad_timeseries

# Once you have these "pure evil" components, you can denoise the data
masker = NiftiMasker(
mask_img=mask_file,
standardize_confounds=True,
standardize=False,
smoothing_fwhm=None,
detrend=False,
low_pass=False,
high_pass=False,
t_r=None, # This shouldn't be necessary since we aren't bandpass filtering
reports=False,
)

# Denoise the data by fitting and transforming the data file using the masker.
# Note: fit_transform returns a 2D (time-by-voxels) array,
# so we convert it back to a 4D image with inverse_transform before saving.
denoised_data = masker.fit_transform(data_file, confounds=orth_bad_timeseries)
denoised_img = masker.inverse_transform(denoised_data)

# Save to file
denoised_img.to_filename(
"sub-01_task-rest_space-MNI152NLin2009cAsym_desc-orthAggrDenoised_bold.nii.gz"
)
1 change: 1 addition & 0 deletions docs/index.rst
@@ -181,6 +181,7 @@ tedana is licensed under GNU Lesser General Public License version 2.1.
contributing
roadmap
api
denoising

.. toctree::
:hidden:
1 change: 1 addition & 0 deletions setup.cfg
@@ -40,6 +40,7 @@ include_package_data = False
[options.extras_require]
doc =
sphinx>=1.5.3
sphinx_copybutton
sphinx_rtd_theme
sphinx-argparse
sphinxcontrib-bibtex