MRG, ENH: Add NIRSport support #9348
Conversation
When the probes get saturated (i.e. when the value they return is "too high"), NIRStar replaces the value with 'NaN' in the standard .wlX file. The true value is kept in a .nosatflags_wlX file, which is a copy of the .wlX file with the true probe values. It is unclear how NIRStar decides that a probe is saturated, so we added a flag to `read_raw_nirx()` and `RawNIRX.__init__` to let the user choose which file to use. Providing two *.wl1 or two *.wl2 files still raises an error, as we check for the existence and uniqueness of the corresponding *.nosatflags_wlX file. Note that the directory structure check doesn't yet discriminate between *.wlX and *.nosatflags_wlX (it checks for any file ending with 'wlX'). Changing that check could be done in another PR, to make sure the user did not provide only *.nosatflags_wlX files and perhaps warn them.
Some devices register 'NaN' values when the probes saturate. Since 'NaN' values can cause unexpected behaviour, we added the ability to annotate them so that they can be filtered out during the process.
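A minimal sketch of how runs of NaN samples can be turned into annotation onsets and durations (pure NumPy; not the actual MNE implementation):

```python
import numpy as np


def nan_spans(data, sfreq):
    """Return (onsets, durations) in seconds for runs where any channel
    of ``data`` (shape: n_channels x n_samples) is NaN."""
    bad = np.isnan(data).any(axis=0)       # True where any channel is NaN
    edges = np.diff(bad.astype(int))       # +1 at run starts, -1 at run ends
    starts = np.flatnonzero(edges == 1) + 1
    stops = np.flatnonzero(edges == -1) + 1
    if bad[0]:                             # run starting at the first sample
        starts = np.r_[0, starts]
    if bad[-1]:                            # run extending to the last sample
        stops = np.r_[stops, bad.size]
    return starts / sfreq, (stops - starts) / sfreq
```

Each (onset, duration) pair could then become one `BAD_*` annotation, so the NaN stretches can be excluded from downstream processing.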
…port v1 testing data
@rderollepot thanks for your support with this PR (and I apologise for jumping between branches; hopefully someone better at git can merge this back in to your PR). I'm going to try and explain my understanding below and see if we can debug my issue. I will explain things you are already aware of, so that others (@larsoner) can also follow along.

I have added some tests for the files which you contributed and diligently documented in mne-tools/mne-testing-data#78. However, my tests are failing. When I read the data, it appears to not contain any NaNs in the channel combinations specified in the montage. I have confirmed that your code does read the correct wave file based on the saturation argument 👍. And I have confirmed that NaNs exist in the wl1 file 👍 and not in the nosatflag file 👍. The data on disk (specifically the files

Is it possible that in the test data the channels in your montage never saturated? I don't know the software/hardware you used, so I'm not sure if a big red light flashes, or if you just have to guess. I'd really like to ensure there is a test that returns NaNs so we can ensure the saturation flag works.

To make things move quicker, would you like to email me another file (doesn't have to be a test file like before, just anything you have where you are certain it saturates) so I can confirm the behaviour in this PR? I'm totally open to other suggestions too. Or let me know if I misunderstand the data; I haven't used this device myself.
@rob-luke it looks like we're close (today or tomorrow maybe) to pushing out 0.23. Do you think we should wait for this, or merge this early in the 0.24 cycle?
Your suggestions are small, so that wouldn't take long. But what I am unsure about is how to test all these options for dealing with saturated sensors. Currently the test data doesn't enable us to test the different options, and that makes me nervous. However, I just got off a phone call with NIRx and they said that the sensors very rarely saturate; they might go a year without seeing saturation. But is that your experience @rderollepot? Or do you see this more often? I will email NIRx to ask about this specifically. I'd love to get this in for the release, but unless we get some way to test it in the coming days, I guess we need to push to 0.24. I'm not sure 😰
They just responded. Apparently saturation was more common on the NIRSport v1, so we should be sure to handle it correctly.
First of all, thanks a lot @rob-luke and @larsoner for your work on this PR !
Yes, that is what I would have expected too 👍
I thought it did, but while reading your very clear analysis I realized there was a mistake in the process I followed to record those test datasets. To verify that I could trigger some saturations (which I was unsure about), I just checked for the presence of NaNs in the .wl1/.wl2 files as you did, but I didn't verify that those channels were indeed in my montage as you pointed out, therefore not ensuring that those saturations are problematic. Sorry for that; I'll plan to record new test datasets if those cannot be used.
I definitely have experimental recordings with saturated channels; I'll look for the cleanest one I can find and email it to you quickly.
We have two NIRx fNIRS in the lab: the first NIRSport, and the NIRScout. We rarely end up with saturations (if any?) using the NIRScout, but we often have some using the NIRSport v1, which confirms the answer you just got from NIRx.
Great, I'm glad we are all of the same understanding.
If possible this would be great. But I am aware that getting time in the lab can be tough, so let me know if it's not possible. Perhaps I'm being too pedantic with tests? We would only need to replace one measurement. Then we would have: 1) a recording with no saturation (already have); 2) a recording with saturation, but not on montage channels (already have); 3) a recording that has saturation in the channel of interest (new measurement, could just overwrite one that you previously committed). Option 3 would allow us to test the new code features. Thoughts?
Instead of recording new data, could you modify the existing montage definition (can be done on the fly in the code, in memory, with …
Both @rderollepot and NIRx have sent me data to test with locally. If these larger files operate correctly locally, then I'd be more confident with a testing hack as you describe. I'll report back here in a week on my progress.
Co-authored-by: Eric Larson <larson.eric.d@gmail.com>
I took a stab at it @rob-luke, see if it makes sense to you
mne/io/nirx/nirx.py (outdated)

```python
dur -= on
onset.extend(on)
duration.extend(dur)
description.extend(['BAD_NAN'] * len(on))
```
Would this be more specific? Maybe helpful if there are other causes of NaN in other data types (M/EEG etc.).
```diff
-description.extend(['BAD_NAN'] * len(on))
+description.extend(['BAD_SATURATION'] * len(on))
```
Code looks good. I am just testing the behaviour locally against some additional private files that @rderollepot kindly shared.
Behaviour looks good to me locally. Here is an example. Thanks @larsoner for doing the heavy lifting.
I would approve, but you can't approve your own pull requests 😄
I also realized this is a perfect use case for channel-specific annotations. I'll add that next, I think.
Okay made them channel-specific. If we know that they are always saturated in source-detector pairs like this (or that we should always annotate both as saturated if one of them is):
Then I could simplify the code a bit to produce half the number of channel-specific annotations. Currently it makes one annotation for S5_D2 760, and a separate annotation for S5_D2 850, but if these are always paired, we could make a single annotation for the time window that lists S5_D2 760 and S5_D2 850 together.
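The pairing idea can be sketched like this, under the assumption discussed here (if either wavelength of an S-D pair saturates, one annotation covers both channels; the `BAD_SATURATED` label and the fixed 760/850 nm wavelengths are illustrative, not the PR's actual output):

```python
import numpy as np


def paired_annotations(mask_wl1, mask_wl2, sfreq, sd_name):
    """Merge two per-wavelength NaN masks into channel-specific
    annotations, each listing both wavelength channels of one S-D pair."""
    bad = np.asarray(mask_wl1, bool) | np.asarray(mask_wl2, bool)
    edges = np.diff(bad.astype(int))       # +1 at run starts, -1 at run ends
    starts = np.flatnonzero(edges == 1) + 1
    stops = np.flatnonzero(edges == -1) + 1
    if bad[0]:
        starts = np.r_[0, starts]
    if bad[-1]:
        stops = np.r_[stops, bad.size]
    chs = [f"{sd_name} 760", f"{sd_name} 850"]
    return [(start / sfreq, (stop - start) / sfreq, "BAD_SATURATED", chs)
            for start, stop in zip(starts, stops)]
```

One annotation per time window, listing both channels, is exactly the "half the number of annotations" simplification described above.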
Great! I do not know the 'hardware-based' answer to that question, but I made a simple script to find out where the NaNs were located in both wl1 and wl2 matrices, and it turns out that, on the 47 datasets I have that saturated, wl1 and wl2 always share the exact same 'NaN mask'. That is not enough to guarantee that they are always saturated in source-detector pairs, though.
My vote is to make it so that if either is saturated, they both get annotated as saturated. It seems to be the case in 47/47 of your cases, so in practice it might be true anyway; and even if it isn't the case for some files, we probably will end up ignoring that S-D pair downstream anyway. WDYT?
Agreed.
Correct. The current behaviour is to mirror the behaviour within S-D pairs, so if one channel is marked as bad, then so is the other. See mne-python/mne/preprocessing/nirs/nirs.py, lines 198 to 203 (commit 8ccc399).
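The mirroring behaviour can be illustrated with a small sketch, assuming the usual `'S#_D# <freq>'` channel naming; this is an illustration, not the code at the lines linked above:

```python
def mirror_bads(ch_names, bads):
    """If one wavelength of a source-detector pair is marked bad,
    mark every channel sharing that S-D prefix bad as well."""
    # e.g. 'S5_D2 760' -> 'S5_D2'
    bad_pairs = {ch.split(" ")[0] for ch in bads}
    return [ch for ch in ch_names if ch.split(" ")[0] in bad_pairs]
```

So marking `'S5_D2 760'` bad would pull in `'S5_D2 850'` as well, keeping the two wavelengths of each pair in sync.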
Actually I need to add more internal calls to those functions. I'll do that in another PR.
I also agree with that 👍
```rst
.. _David Julien: https://github.com/Swy7ch
.. _Romain Derollepot: https://github.com/rderollepot
```
@swy7ch @rderollepot let me know if you want different URLs here
That is great for me, thanks 👍
I also want to take this opportunity to say that I am really grateful for your help @larsoner and @rob-luke in seeing that the great contribution @swy7ch started almost a year ago is brought to completion, improving it with new features along the way. We are still very new to MNE in the lab (it started with David's internship), and I am not sure I would have been able to finish it myself (or it would have taken me ages 😅).
* upstream/main:
  - MRG, ENH: Make _get_hpi_info public (mne-tools#9369)
  - ENH: Add a playback button to the notebook 3d backend (mne-tools#8741)
  - better docs for permutation_cluster_test (mne-tools#9365)
  - MRG: Add fNIRS to html output (mne-tools#9367)
  - When plotting GFP comparison in Report, don't show sensor layout by default (mne-tools#9366)
  - DOC: Update Mayavi troubleshooting section (mne-tools#9362)
  - more tutorial tweaks (mne-tools#9359)
🥳 |
* upstream/main:
  - FIX: make epoch cropping idempotent (mne-tools#9378)
  - MRG, ENH: Add NIRSport support (mne-tools#9348)
  - MRG, ENH: Make _get_hpi_info public (mne-tools#9369)
  - ENH: Add a playback button to the notebook 3d backend (mne-tools#8741)
  - better docs for permutation_cluster_test (mne-tools#9365)
  - MRG: Add fNIRS to html output (mne-tools#9367)
  - When plotting GFP comparison in Report, don't show sensor layout by default (mne-tools#9366)
  - DOC: Update Mayavi troubleshooting section (mne-tools#9362)
  - more tutorial tweaks (mne-tools#9359)
  - MRG, MAINT: Use native GitHub Actions skip (mne-tools#9361)
  - MAINT: Clean up crufty code [circle front] (mne-tools#9358)
  - API: Complete deprecations (mne-tools#9356)
  - Add qdarkstyle, darkdetect to environment.yml [circle full] (mne-tools#9357)
  - FIX: Fix FIX: Add
I am trying to determine how to use the GitHub magic to do some work on #7936. I tried pushing to uge-lescot#1 but made a mess of it. Now I am trying this.
OK, this hasn't added to the existing PR; instead it has opened a new one. I think this is a bad idea, because I would like the original authors to get the internet points associated with merging the code.
How can I work on @Swy7ch's #7936 PR? I just wish to get all the tests green for them.