
Adding modules for real time specific processing #26

Open
timonmerk opened this issue Mar 17, 2021 · 5 comments

Comments

@timonmerk
Contributor

timonmerk commented Mar 17, 2021

I would like to propose some additional modules for the mne-realtime package that would allow for fast computation and handling of processed data.

rt_filter.py
The MNE filter functions seem fairly slow and could therefore become a bottleneck for high-sampling-rate processing. Instead, the bandpass filter coefficients can be computed before real-time processing starts and then only applied with a fast numpy convolution (see the sketch below).
(see https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/filter.py for an example implementation)
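
A minimal sketch of that idea, assuming an FIR design via scipy.signal.firwin; the class name `RealTimeFilter` and its parameters are hypothetical and not an existing MNE API:

```python
# Precompute FIR band-pass coefficients once, then apply them to each
# incoming data chunk with a plain numpy convolution.
import numpy as np
from scipy.signal import firwin


class RealTimeFilter:
    def __init__(self, sfreq, f_bands, numtaps=101):
        # one set of FIR coefficients per (l_freq, h_freq) band
        self.coefs = [
            firwin(numtaps, [lf, hf], pass_zero=False, fs=sfreq)
            for lf, hf in f_bands
        ]

    def apply(self, data):
        """Filter a (n_channels, n_samples) chunk; returns an array of
        shape (n_bands, n_channels, n_samples)."""
        return np.stack([
            np.stack([np.convolve(ch, b, mode="same") for ch in data])
            for b in self.coefs
        ])


# usage: two bands, one second of 6-channel data at 1 kHz
rt_filter = RealTimeFilter(sfreq=1000, f_bands=[(13, 35), (60, 90)])
chunk = np.random.randn(6, 1000)
filtered = rt_filter.apply(chunk)   # shape (2, 6, 1000)
```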

rt_normalization.py
During real-time analysis it becomes necessary to normalize data with respect to a certain time frame; here a mean or median option could be handy (see the sketch below).
(see https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/realtime_normalization.py for an example implementation)
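
A minimal sketch of such a sliding-window normalization, where only the most recent samples enter the statistics; the function name and arguments are assumptions for illustration:

```python
import numpy as np


def rt_normalize(new_chunk, past_buffer, method="mean"):
    """Normalize a (n_channels, n_samples) chunk by statistics of the
    past buffer (n_channels, n_past_samples), e.g. the last n seconds."""
    if method == "mean":
        center = past_buffer.mean(axis=1, keepdims=True)
        scale = past_buffer.std(axis=1, keepdims=True)
    elif method == "median":
        center = np.median(past_buffer, axis=1, keepdims=True)
        # median absolute deviation as a robust scale estimate
        scale = np.median(np.abs(past_buffer - center), axis=1, keepdims=True)
    else:
        raise ValueError(f"unknown normalization method: {method}")
    # guard against division by zero on flat channels
    return (new_chunk - center) / np.where(scale == 0, 1, scale)
```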

rt_features.py
For neural decoding, different kinds of features (frequency, time, spatial domain) need to be computed, ideally across multiple threads. This could be achieved with a class that calls predefined feature routines (see the sketch below).
(see https://github.com/neuromodulation/py_neuromodulation/blob/main/pyneuromodulation/features.py for an example implementation)
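
A minimal sketch of such a class, dispatching registered feature routines to a thread pool; the feature functions shown in the usage are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


class RealTimeFeatures:
    def __init__(self, feature_funcs, n_jobs=4):
        self.feature_funcs = feature_funcs  # dict: name -> callable(data)
        self.pool = ThreadPoolExecutor(max_workers=n_jobs)

    def compute(self, data):
        """Run all registered feature routines on one data chunk in parallel."""
        futures = {name: self.pool.submit(func, data)
                   for name, func in self.feature_funcs.items()}
        return {name: fut.result() for name, fut in futures.items()}


# usage with two placeholder features
features = RealTimeFeatures({
    "variance": lambda d: d.var(axis=1),
    "line_length": lambda d: np.abs(np.diff(d, axis=1)).sum(axis=1),
})
out = features.compute(np.random.randn(6, 1000))
```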

rt_analysis.py
When analyzing streaming data, the processing pipeline needs to be defined. A pandas DataFrame with predictions, features and timestamps could be used for saving results. After data acquisition ends, the data can be saved using mne_bids, and decoding predictions/performances could be stored in the BIDS derivatives (see the sketch below).
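
A minimal sketch of collecting per-window results in a DataFrame and writing them out after the run; the column names and derivatives path are illustrative assumptions, not a fixed BIDS layout:

```python
import os

import pandas as pd

rows = []
# inside the acquisition loop, one row per processed window would be appended:
rows.append({"time": 0.1, "beta_power": 1.3, "prediction": 0})
rows.append({"time": 0.2, "beta_power": 1.1, "prediction": 1})

# after acquisition ends, write everything to a (hypothetical) derivatives folder
df = pd.DataFrame(rows)
os.makedirs("derivatives/decoding", exist_ok=True)
df.to_csv("derivatives/decoding/sub-01_task-rt_features.tsv",
          sep="\t", index=False)
```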

@jasmainak
Member

Do you have some benchmark scripts that compare the timing of your implementation against MNE?

@timonmerk
Contributor Author

timonmerk commented Mar 18, 2021

@jasmainak To refer to the specific files:

rt_filter.py Currently no, but I can provide such a script.
rt_normalization.py The type of normalization implemented here only takes into account the n previous samples; to the best of my knowledge this is not implemented in MNE and mainly comes into play for real-time applications.
rt_features.py Here the idea would be to run multiple feature estimation routines in parallel, and they might rely on MNE functions. But I also see that sklearn.pipeline.FeatureUnion could in fact replace such a file (see the sketch after this list).
rt_analysis.py This would only be a module that calls / starts the real-time acquisition and later on saves the data using e.g. mne_bids.
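
A minimal sketch of how sklearn.pipeline.FeatureUnion could take over that role; the two FunctionTransformer features are placeholders for real extraction routines:

```python
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import FunctionTransformer

# each transformer maps (n_windows, n_samples) -> (n_windows, 1)
union = FeatureUnion(
    [("variance", FunctionTransformer(lambda X: X.var(axis=1, keepdims=True))),
     ("ptp", FunctionTransformer(lambda X: np.ptp(X, axis=1, keepdims=True)))],
    n_jobs=2,  # feature branches run in parallel
)

X = np.random.randn(10, 1000)   # 10 windows of 1000 samples
feats = union.fit_transform(X)  # shape (10, 2)
```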

@teonbrooks @jasmainak What do you think about the separate modules? I can create separate pull requests to demonstrate how the modules interact.

@teonbrooks
Member

I like the architecture of these files, and they would be a great starting point.

Are these listed in the same order you would expect them to be applied as well?

@timonmerk
Contributor Author

In general, a script would be started by initiating rt_analysis.py. There the LSL or FieldTrip client would be initialized, and then in a loop new data would be fetched with e.g. client.get_data_as_epoch().
Based on this data, normalization (rt_normalization.py) and then feature extraction (in the simplest case a bandpass filtering, rt_filter.py) would be applied. After finishing (which could be time based, or triggered by a key press), the script would save the data again in rt_analysis.py in an MNE RawArray (see the sketch below).
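
A minimal sketch of that loop, using the existing mne_realtime LSLClient and get_data_as_epoch; the commented rt_normalize / rt_filter calls, the stream id and the fixed stopping condition stand in for the proposed modules:

```python
import numpy as np
import mne
from mne_realtime import LSLClient

data_list = []
with LSLClient(host="my_stream_id") as client:   # LSL source id (assumption)
    for _ in range(100):                          # time- or key-press-based stop
        epoch = client.get_data_as_epoch(n_samples=1000)
        data = epoch.get_data()[0]                # (n_channels, n_samples)
        # data = rt_normalize(data, past_buffer)  # proposed rt_normalization.py
        # feats = rt_filter.apply(data)           # proposed rt_filter.py
        data_list.append(data)

# after acquisition ends, store everything in an MNE RawArray
raw = mne.io.RawArray(np.hstack(data_list), epoch.info)
```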

Maybe a further word on normalization: of course mne.decoding.Scaler could also be applied to all previously acquired data, but some of the previous analyses we did in the lab showed that it might be beneficial, and maybe also more computationally efficient, to normalize only with respect to the last n seconds.

@timonmerk
Contributor Author

Maybe a more general question: what do you think about the idea of specifying common preprocessing and feature parameters in a settings file?
I know that this could easily be done by hand, but it would be very handy to specify some operations a priori, e.g. resampling, normalization method, notch filter, and bandpass filter bands.
For my previous analyses I implemented such a file: https://github.com/neuromodulation/py_neuromodulation/blob/main/examples/settings.json. A sketch of the idea is shown below. It would be great to get feedback on whether something like this could be beneficial in mne-realtime :)
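
A minimal sketch of what such a settings file could look like; the keys mirror the parameters mentioned above and are illustrative, not an existing mne-realtime format:

```python
import json

# hypothetical a-priori settings for the real-time pipeline
settings = {
    "resample": {"enabled": True, "sfreq_new": 500},
    "notch_filter": {"enabled": True, "freqs": [50, 100]},
    "normalization": {"method": "median", "window_s": 30},
    "bandpass_bands": {"theta": [4, 8], "beta": [13, 35], "gamma": [60, 90]},
}

with open("rt_settings.json", "w") as f:
    json.dump(settings, f, indent=2)
```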
