add MEGnet to make MNE-ICALabel work on MEG data #207
base: main
Conversation
Thanks for the PR, it looks like a very good start! Would it be possible to take a known input, load it in MNE and load it in BrainStorm; then run the pipeline on MNE (this PR) and on BrainStorm, export the output from BrainStorm, and compare it to the output from MNE (this PR)?
MEGnet is an open-source model, independent of BrainStorm, built on a Keras-based architecture. The input is the ICA topomap from BrainStorm, a 120x120 RGB array, along with the corresponding ICA time-series array. The output is the probability that the component is brain/other, eye blink, eye movement, or heart.

I performed "Feature Simulation" because the model was trained on ICA topomap RGB arrays from BrainStorm. When I used MNE ICA topomaps to generate the RGB images, the classification performance was not good. To improve this, I ran simulations in features.py. Below is the result using the MNE sample MEG data:

To fully follow the MEGnet pipeline for comparison, we could start with BrainStorm, going from raw ➡️ ICA ➡️ topomaps & time series, and then run the original Keras model alongside the method in this PR for comparison. However, I have never used BrainStorm before. Perhaps I can write a comparison using the feature extraction method from this PR (
Thanks for the clarification. I'm not worried about a difference between Keras and ONNX, but indeed we can test this:
The good thing is that this is easy, as the input X can be absolutely anything with the correct input shape and doesn't need to correspond to actual brain data. Next, if this model was trained on topographic maps from BrainStorm, I would expect it to behave differently if you feed it slightly different topographic maps, as you commented. In this sense, we need to replicate the topographic map from BrainStorm, starting from an MNE object. This is what is done for the ICLabel model, where the functions in https://github.com/mne-tools/mne-icalabel/blob/main/mne_icalabel/features/topomap.py replicate the topographic map from EEGLab. To test the replication, we extracted topographic maps from EEGLab (in the end it's just a .mat file) and compared them with what we computed with the function in topomap.py.
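A sketch of that comparison step, assuming the EEGLab maps were exported to a .mat file and the MNE-computed maps saved alongside (all file and variable names here are hypothetical):

```python
import numpy as np
from numpy.testing import assert_allclose
from scipy.io import loadmat

# reference topographic maps exported from EEGLab (hypothetical file/key names)
eeglab_maps = loadmat("eeglab_topomaps.mat")["topomaps"]

# maps computed from the same ICA decomposition with mne_icalabel's topomap.py
mne_maps = np.load("mne_topomaps.npy")

# the replication passes if both pipelines agree numerically
assert_allclose(mne_maps, eeglab_maps, atol=1e-6)
```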
Oh! I see, this will be a more systematic technical validation. I will follow these two steps for verification next. Thanks!
Hi! I recently had some time to complete the two-step verification process. You can see the details in a

There are some differences between the topomaps in MNE and BrainStorm. This is because, although I independently performed the same preprocessing and ICA steps on both platforms, I couldn't find a way to set the
Apologies for the multiple pushes 😢. This is my first time creating a PR, and I'm still not very familiar with Git.
Thanks for this work @colehank! If I understand the code in your temp repo, the main difference is in the computation of the ICA, and the corresponding topomaps, within the BrainStorm GUI?

Is it possible to also compare the output of the MEGNet within BrainStorm and the MEGNet within mne-icalabel (this PR) on the same topomaps? E.g. using either the topomaps you computed from BrainStorm, or the topomaps from mne-icalabel? This would verify that the network is fully the same. One thing you can do is save 1 input topomap and 1 output from the network, and we can just use this "small test data" as a unit-test within this repo, as sketched below. For example, we do this here for ICLabel.

Now, the next, more difficult question is how to proceed if the topomaps are different. My questions would be: i) are there substantial differences that will lead to performance changes between the MNE approach vs the BrainStorm approach? and ii) if not, do we care about matching exactly what is in BrainStorm?
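A sketch of what such a unit test could look like, assuming the input/output pair is saved as .npy fixtures next to the test (file names are hypothetical) and the network runs through onnxruntime:

```python
import numpy as np
import onnxruntime as ort
from numpy.testing import assert_allclose


def test_megnet_forward_pass():
    """Compare the ONNX network output against a saved reference output."""
    session = ort.InferenceSession("megnet.onnx")
    # one saved fixture per network input, keyed by the input names
    feeds = {inp.name: np.load(f"{inp.name}.npy") for inp in session.get_inputs()}
    expected = np.load("expected_output.npy")
    (output,) = session.run(None, feeds)
    assert_allclose(output, expected, atol=1e-6)
```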
Don't worry about it :) I started a thorough review on Thursday, before I got interrupted by faults on our MEG system 😢

@adam2392 That's one part I did not get at the beginning either, but the model/network is not available in BrainStorm. The people who developed it made a strange choice: create the topographic maps in BrainStorm, export them as PNGs of shape (120, 120, 3), and export the ICA traces; then train a network in Python on the colored images and on the traces. Thus, the network is already in Python, not much to compare here ;)
I'll post those initial comments from last Thursday, you could already start tackling them :)
Thanks for your suggestions! @mscheltienne @adam2392 There are indeed differences in the implementation of Infomax between BrainStorm and MNE when we plot components, but from my experience, this difference doesn't seem to affect MEGnet's classification performance, as demonstrated in the figure above. Based on @mscheltienne's comments, I've made an updated iteration, and I hope it meets the required functionality.
Thanks for tackling my comments, I'll try to continue the review as soon as possible!
Left a few comments. Overall looking good and thanks for answering questions!
mne_icalabel/megnet/_utils.py

```python
import numpy as np


def cart2sph(x, y, z):
    """Convert Cartesian coordinates to spherical coordinates."""
    xy = np.sqrt(x * x + y * y)
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arctan2(y, x)  # azimuth
    phi = np.arctan2(z, xy)  # elevation
    return r, theta, phi


def pol2cart(rho, phi):
    """Convert polar coordinates to Cartesian coordinates."""
    x = rho * np.cos(phi)
    y = rho * np.sin(phi)
    return x, y
```
These functions are also defined for ICLabel, so I wonder if we can pull them out into general geometry-related utility functions.
`def _pol2cart(`
`pol2cart` is indeed a duplicate; `cart2sph` returns the elements in a different order and it's a bit annoying to change the order. We can keep code de-duplication for a future PR.
mne_icalabel/megnet/_utils.py

```python
def make_head_outlines(sphere, pos, outlines, clip_origin):
```
Can you write a docstring for the function?
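Something along these lines could work, a numpydoc-style sketch (the parameter and return descriptions are assumptions about what the function computes):

```python
def make_head_outlines(sphere, pos, outlines, clip_origin):
    """Build the head outlines used when rendering the topographic maps.

    Parameters
    ----------
    sphere : array of shape (4,)
        The (x, y, z, radius) of the fitted sphere, in the same units as pos.
    pos : array of shape (n_channels, 2)
        The 2D projected sensor positions.
    outlines : str
        The outline style, e.g. 'head'.
    clip_origin : tuple of float
        The (x, y) origin used to clip the outline.

    Returns
    -------
    outlines_dict : dict
        The outline coordinates consumed by the topomap plotting helpers.
    """
```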
mne_icalabel/megnet/features.py
```python
def _check_notch(
    raw: BaseRaw, picks: str, neighbor_width: float = 2.0, threshold_factor: float = 3
) -> bool:
```
I'm wondering why we don't just notch filter for them at the `line_noise` and its harmonics? Would this impact the results?

I guess I can imagine a user constructing the Raw object via `RawArray`, which wouldn't require them to specify the line noise frequency, so maybe this wouldn't work all the time.
Elekta/MEGIN systems do store the line noise frequency, but I don't know if this is the case for other systems. I would keep it as it is now, with the responsibility to apply the notch filter left to the user.
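For reference, the user would then apply the notch filter themselves before running the pipeline, e.g. assuming 50 Hz line noise and `raw` being the recording to clean:

```python
import numpy as np

# remove 50 Hz line noise and its harmonics up to 250 Hz before running MEGnet
raw.notch_filter(freqs=np.arange(50, 251, 50))
```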
@colehank Hi! Apologies for the slow review pace, my bandwidth is limited. I pushed 4 commits which:

But more importantly, you'll see that I added a file

In this case, for the

Could you try to write such tests for:

You could eventually test more precisely

Again, you could test more precisely

Finally, at the moment

Could you rework the output to match the output from ICLabel for consistency across the API? ICLabel returns:

Happy to help/provide more details or resources!
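For reference, a sketch of the ICLabel-style return structure being asked for (the key names follow the existing `label_components` implementation; the shape comments are indicative):

```python
# what megnet_label_components could return to mirror ICLabel:
{
    "y_pred_proba": y_pred_proba,  # array of shape (n_components,): probability of the predicted class
    "labels": labels,  # list of str: one predicted label per component
}
```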
Thank you all for your helpful suggestions, and apologies for the delay in my response, as I've been resting due to illness. I've just submitted the unit test script for the main functions in

Additionally, I made the following updates:
I'm not quite sure why this is happening... It seems that the model path is not being recognized correctly. Could it be because my branch is not in sync with the main branch?
Thanks, I'll try to have a look tomorrow!

That's fine and expected. Looking briefly at the test I see:

You'll find a spelling error at line 34 in this file; suggestions are

Which you can probably reproduce locally by running:
Sorry, this was a small mistake I made. I have already corrected it in the latest submission.

I think it was caused by the resource

but, when I try
No need to apologize, that's fine and it does seem to be an environment issue. The online CIs don't suffer from those, so they will tell us if
I think it's looking good. Just have to wrangle with our testing framework.
Thanks for the hard efforts @colehank!
```
Parameters
----------
raw : Raw.
```
Suggested change:

```diff
- raw : Raw.
+ raw : Raw
```
```python
from .features import get_megnet_features

_MODEL_PATH: str = files("mne_icalabel.megnet") / "assets" / "megnet.onnx"
```
I think this line causes issues. See:

```python
network_file = files("mne_icalabel.iclabel.network") / "assets" / "ICLabelNet.onnx"
```

Let us know if that doesn't work.
I don't think he modified anything from `iclabel`, that should not be failing.
Yes nvm. I misread the error.
Hmm it seems the file is somehow not found.
Oh, I found the issue! After I added the `__init__.py` file in the `mne_icalabel/megnet` directory, it was able to correctly find `megnet.onnx`. (But I remember that Python, at least v3.8, doesn't require an `__init__.py` to treat a folder as a package.)
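Worth noting: `importlib.resources.files()` resolves assets through the package's loader, and on the Python versions supported here it does not handle namespace packages, which is likely why the explicit `__init__.py` fixes the lookup. A minimal check, assuming the package layout of this PR:

```python
from importlib.resources import files

# "mne_icalabel.megnet" must be a regular package (with an __init__.py)
# for files() to resolve; namespace packages are not supported here
model_path = files("mne_icalabel.megnet") / "assets" / "megnet.onnx"
assert model_path.is_file()
```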
"""Test whether the function returns the correct artifact index.""" | ||
real_atrifact_idx = [0, 3, 5] # heart beat, eye movement, heart beat | ||
prob = megnet_label_components(*raw_ica) | ||
this_atrifact_idx = list(np.nonzero(prob.argmax(axis=1))[0]) |
It's a bit strange. When I ran it yesterday, the model detection results were 0, 3, 5; but today they became 0, 3, 4, 5, which caused the pytest to fail. It might be due to randomness in the ICA algorithm (even though I set the random_state)?
Maybe we can try asserting `set(real_atrifact_idx).issubset(set(this_atrifact_idx))` to ensure that the true labels are a subset of the model's predicted labels. This is a debugging-specific approach, but it might not be entirely reasonable...
If your ICA has a fixed random state, and if the model is deterministic (which I think it is?), you should get the exact same results. To debug you can try:

- run the ICA part 2/3 times and compare the mixing/unmixing matrices with `assert_allclose`; they should be identical.
- run the model part 2/3 times and compare the model outputs (not the index, but the probabilities per class); they should be identical.

A minimal version of this check is sketched below.
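A sketch of that determinism check, assuming `raw` is the preprocessed recording used in the test and 20 Infomax components (both assumptions):

```python
from mne.preprocessing import ICA
from numpy.testing import assert_allclose

# fit the same ICA twice with a fixed seed; the decompositions should match
ica1 = ICA(n_components=20, method="infomax", random_state=42).fit(raw)
ica2 = ICA(n_components=20, method="infomax", random_state=42).fit(raw)
assert_allclose(ica1.mixing_matrix_, ica2.mixing_matrix_)
assert_allclose(ica1.unmixing_matrix_, ica2.unmixing_matrix_)

# run the model twice on identical inputs; the probabilities should match too
prob1 = megnet_label_components(raw, ica1)
prob2 = megnet_label_components(raw, ica2)
assert_allclose(prob1, prob2)
```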
Thanks! I followed your debugging method, and it turns out that both ICA with a fixed random state and MEGNet are deterministic.

To investigate further, I ran tests with both the pytest script in the PR and a new script I created locally:

- the model outputs were consistent across both setups.
- in the model's output, specifically in the 4th row `[0.5, 0.5, 0.0, 0.0]`, the values for `brain/other` and `eye movement` are the same, 0.5. Logically, when using `output.argmax(axis=1)`, the returned `array[4]` should be 0, as it is the index of the first maximum value. My local script behaves as expected, returning `array[4] = 0`. However, in the PR's tests, `array[4]` is 1.

This seems to be the source of the issue: despite using the same `argmax` method, the pytest runs in the PR are not following the expected behavior of `numpy.argmax`?
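On exact ties `numpy.argmax` does return the first index, so a tiny floating-point difference between the two environments (invisible at default print precision) would be enough to flip the winner; this is one plausible explanation, illustrated below:

```python
import numpy as np

row = np.array([0.5, 0.5, 0.0, 0.0])
print(row.argmax())  # 0 -- the first index wins on an exact tie

# a minuscule difference, hidden by default print precision, flips the result
row_noisy = np.array([0.49999997, 0.5, 0.0, 0.0])
print(row_noisy.argmax())  # 1
```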
Closes #134
PR Description
MEGnet Integration
Hi everyone,
I integrated MEGnet into `mne_icalabel/megnet`. Here's a summary of the main work:

- ONNX Conversion: Transformed the original Keras model into ONNX format and confirmed that the prediction outputs remain consistent (see the conversion sketch after this list).
- Feature Simulation: Added `megnet/features.py` to simulate the BrainStorm ICA topomap used in MEGnet training.
- Prediction Function Simplification: Refined the MEGnet prediction logic in `megnet/label_components.py` for simplicity. Also, the outputs now align with the current EEG `label_components`, using the keys `label` and `y_pred_proba`.
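The conversion script itself is not included in this description; for reference, a sketch of how a Keras-to-ONNX conversion is typically done with tf2onnx (the Keras file name is hypothetical):

```python
import tensorflow as tf
import tf2onnx

# load the original Keras MEGnet model (hypothetical file name)
model = tf.keras.models.load_model("MEGnet.h5")

# convert and save; tf2onnx infers the input signature from the model
tf2onnx.convert.from_keras(model, output_path="megnet.onnx")
```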
You can run the following commands to check out the prediction performance of MEGnet on MNE’s sample data:
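The commands themselves were not preserved here; below is a hypothetical sketch reconstructed from the test code above (the import path, channel picks, and filtering/ICA settings are all assumptions):

```python
import mne
from mne.preprocessing import ICA
from mne_icalabel.megnet import megnet_label_components  # import path assumed

sample_dir = mne.datasets.sample.data_path()
raw = mne.io.read_raw_fif(
    sample_dir / "MEG" / "sample" / "sample_audvis_raw.fif", preload=True
)
# channel selection and preprocessing choices here are assumptions
raw.pick("mag").filter(1.0, 100.0).resample(250)

ica = ICA(n_components=20, method="infomax", random_state=42)
ica.fit(raw)

prob = megnet_label_components(raw, ica)
print(prob)  # per-component probabilities: brain/other, eye blink, eye movement, heart
```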
However, to successfully integrate into mne-icalabel, changes need to be made to the contents of `mne-icalabel/label_components`. I'm worried that, given my understanding of the project, I might mess it up, so all the work is contained within `mne_icalabel/megnet`.

Merge checklist
Maintainer, please confirm the following before merging: