Utility for testing EEG data-cleaning pipelines? #193

Open
a-hurst opened this issue Jun 1, 2021 · 6 comments

a-hurst commented Jun 1, 2021

Hi there,

Just stumbled upon this project: looks super-useful for reproducible science! I'm currently a collaborator on PyPREP (an MNE-Python reimplementation of the MATLAB PREP pipeline), and am wondering how useful moabb would be for evaluating generalized (i.e. BCI-unrelated) EEG preprocessing pipelines?

It would be highly useful for us to have a tool that benchmarks how well a given filtering method or noisy channel detection method improves a dataset's SNR, and it seems like comparing the effects of our preprocessing on BCI classification accuracy would be a good way of doing that. Is this kind of workflow something moabb is designed to support?

Thanks in advance!


sylvchev commented Jun 2, 2021

Hi, thanks for contacting us. PyPREP is a thrilling project, and an automatic cleaning process for EEG is a brilliant idea.

If you want to use BCI as a proxy for more generic EEG, and see how your pipelines perform with and without the cleaning, MOABB is definitely a very good match! You could use the predefined pipelines for processing BCI datasets, and it is possible to modify those pipelines to include your PyPREP cleaning stage. The pipelines should follow the scikit-learn format and can take numpy arrays or MNE epochs as input.
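
For instance, a minimal sketch of what such a pipeline dict can look like (CSP+LDA here is only a placeholder for whatever processing chain you prefer):

from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline

# Any dict of scikit-learn pipelines can be handed to a MOABB evaluation;
# each pipeline receives the epoched data as an (n_epochs, n_channels, n_times) array
pipelines = {"CSP+LDA": make_pipeline(CSP(n_components=8), LDA())}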

We could discuss this by videoconference during office hours (see #191), on Gitter, or in this issue.


a-hurst commented Jun 2, 2021

Hey @sylvchev, thanks for the quick response! I've got some other academic projects to tackle first before I can delve too far into this, but I'll get in touch when I've got some time.

Very excited by the prospect of having a real-world benchmark for pipeline performance: there are quite a few areas where the original PREP makes some odd computational choices, and we've been trying to decide whether to replicate its behaviour or use a method that makes more sense to us. A tool like this would make answering those questions much easier.


sylvchev commented Jun 4, 2021

We discussed this during office hours, as you may read in the wiki. This could be really interesting, as @jsosulski made a script to generate plots that provide a visual sanity check of each subject's data across all datasets. Right now it is only for P300, and there are already some interesting findings regarding data quality, as shown in #184.

We will be happy to help you use MOABB for benchmarking the computational choices of PyPREP. Do not hesitate to ping us when you are ready to work on this, or to come and say hi during office hours (#191).

a-hurst mentioned this issue Jun 9, 2021

a-hurst commented Jun 10, 2021

@sylvchev Thanks again for meeting with me today! I've written a basic test script to get PyPREP working with moabb, but I'm running into a strange error relating to the CSP step:

Full traceback:
Traceback (most recent call last):
  File "pyprep_test.py", line 88, in <module>
    results = evaluation.process(pipelines)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/moabb/evaluations/base.py", line 136, in process
    for res in results:
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/moabb/evaluations/evaluations.py", line 43, in evaluate
    score = self.score(clf, X[ix], y[ix], self.paradigm.scoring)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/moabb/evaluations/evaluations.py", line 70, in score
    error_score=self.error_score,
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/utils/validation.py", line 72, in inner_f
    return f(**kwargs)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 406, in cross_val_score
    error_score=error_score)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/utils/validation.py", line 72, in inner_f
    return f(**kwargs)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 248, in cross_validate
    for train, test in cv.split(X, y, groups))
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/parallel.py", line 1041, in __call__
    if self.dispatch_one_batch(iterator):
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/parallel.py", line 859, in dispatch_one_batch
    self._dispatch(tasks)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/parallel.py", line 777, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/_parallel_backends.py", line 208, in apply_async
    result = ImmediateResult(func)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/_parallel_backends.py", line 572, in __init__
    self.results = batch()
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/parallel.py", line 263, in __call__
    for func, args, kwargs in self.items]
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/parallel.py", line 263, in <listcomp>
    for func, args, kwargs in self.items]
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 531, in _fit_and_score
    estimator.fit(X_train, y_train, **fit_params)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/pipeline.py", line 330, in fit
    Xt = self._fit(X, y, **fit_params_steps)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/pipeline.py", line 296, in _fit
    **fit_params_steps[name])
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/joblib/memory.py", line 352, in __call__
    return self.func(*args, **kwargs)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/sklearn/pipeline.py", line 740, in _fit_transform_one
    res = transformer.fit_transform(X, y, **fit_params)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/mne/decoding/mixin.py", line 33, in fit_transform
    return self.fit(X, y, **fit_params).transform(X)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/mne/decoding/csp.py", line 177, in fit
    sample_weights)
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/mne/decoding/csp.py", line 538, in _decompose_covs
    eigen_values, eigen_vectors = linalg.eigh(covs[0], covs.sum(0))
  File "/Users/austinhurst/.local/share/virtualenvs/moabb_pyprep-DCPtFWTT/lib/python3.7/site-packages/scipy/linalg/decomp.py", line 581, in eigh
    'or eigenvectors were computed.'.format(info-n))
numpy.linalg.LinAlgError: The leading minor of order 63 of B is not positive definite. The factorization of B could not be completed and no eigenvalues or eigenvectors were computed.

This happens for subjects 1, 3, and 4, but not subject 2 for some reason.

import warnings

import numpy as np
import mne
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import Pipeline

from pyprep import PrepPipeline

from moabb import set_log_level
from moabb.paradigms import LeftRightImagery
from moabb.evaluations import WithinSessionEvaluation
from moabb.datasets import PhysionetMI

set_log_level("error")
warnings.filterwarnings("ignore")


# Initialize PREP Pipeline setup

bci2000 = PhysionetMI()
montage = mne.channels.make_standard_montage("standard_1020")
# Grab the Info from a single run to get the EEG channel names and sampling rate
raw_info = bci2000.get_data(subjects=[1])[1]['session_0']['run_4'].info

eeg_index = mne.pick_types(raw_info, eeg=True, eog=False, meg=False)
ch_names_eeg = list(np.asarray(raw_info["ch_names"])[eeg_index])
sample_rate = raw_info["sfreq"]
powerline_hz = 60

prep_params = {
    "ref_chs": ch_names_eeg,
    "reref_chs": ch_names_eeg,
    # Remove power line noise at 60 Hz and all harmonics below Nyquist
    "line_freqs": np.arange(powerline_hz, sample_rate / 2, powerline_hz),
}


# Initialize custom class for PyPREP integration

class PhysionetPREP(PhysionetMI):
    """PhysionetMI subclass that runs PyPREP on each raw run before epoching."""

    def _load_one_run(self, subject, run, preload=True):
        raw = PhysionetMI._load_one_run(self, subject, run, preload)
        msg = "\n\n### Running PREP for Subject {0}, Run {1} ###\n"
        print(msg.format(subject, run))
        prep = PrepPipeline(raw, prep_params, montage, random_state=42)
        prep.fit()
        prep.raw_eeg.info['bads'] = []  # Clear any remaining bads
        return prep.raw_eeg


# Initialize dataset/paradigm

bci2000 = PhysionetMI()
bci2000.subject_list = list(range(1, 9))

bci2000_prep = PhysionetPREP()
bci2000_prep.subject_list = list(range(1, 9))
bci2000_prep.code = "PREP Physionet Motor Imagery"

datasets = [bci2000_prep, bci2000]

paradigm = LeftRightImagery()


# Initialize pipeline evaluation

evaluation = WithinSessionEvaluation(
    paradigm=paradigm,
    datasets=datasets,
    overwrite=True
)


# Initialize BCI pipelines

pipelines = {
    'csp+lda': Pipeline(steps=[
        ('csp', CSP(n_components=8)),
        ('lda', LDA())
    ])
}


# Actually run the pipeline

results = evaluation.process(pipelines)
breakpoint()  # Pause to inspect the results interactively

This was done using the latest GitHub versions of PyPREP and MNE. I also get the error when running this built-in MNE CSP example with PyPREP-processed Raw objects, so I know it's not a moabb-specific issue. Any ideas what might be going on here?

(Note: if trying to run this script yourself, please use the latest GitHub version of PyPREP instead of the one on PyPI, which is missing a lot of major bugfixes and improvements.)

EDIT: After a bit more reading, I'm starting to suspect that PyPREP's interpolation of bad channels is the problem. Since an interpolated channel is highly predictable from the values of the other channels, the covariance matrix becomes rank-deficient, which throws a wrench in SciPy's linalg.eigh calculations. Can you think of any way around this? It's also possible to avoid interpolation and just flag/drop all bad channels, but that seemed to cause problems with MOABB due to differences in matrix sizes.
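
If it helps narrow this down, a rough way to test that suspicion (a sketch only; it assumes X is the (n_epochs, n_channels, n_times) array returned by paradigm.get_data):

import numpy as np

# Stack all epochs side-by-side into a channels x samples matrix; if PREP's
# interpolation made some channels exact linear combinations of the others,
# the rank drops below the channel count and eigh() on the covariances fails
X2d = X.transpose(1, 0, 2).reshape(X.shape[1], -1)
print(np.linalg.matrix_rank(X2d), "of", X.shape[1], "channels independent")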

jsosulski commented

Hi @a-hurst, I did not have time to run the code, but considering the error message, the issue is probably that your data is no longer full rank after using PyPREP. This can happen, e.g., when you perform an ICA decomposition, remove the ICA components that you think are artifacts, and then project back into sensor space: the data no longer has full rank. The best solution would be to perform CSP+LDA etc. in the ICA space. A quick fix / sanity check could be to add some minor noise to the data, or, instead of removing the artifactual components, to scale them down to 0.0001 or so.
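
For the noise quick fix, something like this could be dropped in front of the CSP step (a sketch only, untested; the Jitter transformer and its scale parameter are just illustrative):

import numpy as np
from mne.decoding import CSP
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import Pipeline

class Jitter(BaseEstimator, TransformerMixin):
    """Add tiny Gaussian noise so rank-deficient epochs become full rank."""

    def __init__(self, scale=1e-4, seed=0):
        self.scale = scale  # noise std as a fraction of the overall signal std
        self.seed = seed

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        rng = np.random.default_rng(self.seed)
        return X + rng.normal(0.0, self.scale * X.std(), size=X.shape)

pipelines["csp+lda (jitter)"] = Pipeline(steps=[
    ("jitter", Jitter()),
    ("csp", CSP(n_components=8)),
    ("lda", LDA()),
])

If I remember correctly, mne.decoding.CSP also accepts a reg argument (e.g. reg="ledoit_wolf") that shrinks the covariance estimates, which might be a cleaner way to get the same effect.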


sylvchev commented Jun 11, 2021

I agree with your edit and with @jsosulski: the interpolation of bad/missing channels relies on a linear combination of the other channels, which makes the matrix rank-deficient, so covariance-based pipelines cannot be applied.
A first possibility is to use source-based pipelines to avoid the problem, as Jan suggested (with ICA, for example), or to reduce the dimensionality of your data. If the matrices are not rank-deficient but just ill-conditioned, you could use Riemannian pipelines with the "logeuclid" or "logdet" metrics in pyRiemann. It does not work miracles, but it is more robust than the plain Riemannian metric (the AIRM or Fisher metric in pyRiemann). You could also flag the channels that are problematic for each subject and remove them from the whole dataset (you can select the electrodes you want in moabb.paradigms), but if your data are noisy you could end up excluding almost all electrodes.

I'm working on a mean estimator that can handle such situations (http://www.acml-conf.org/2020/video/paper/yger20a), but we don't have a pipeline ready yet.
