ENH: labels_near_sensors function #10303
The correct way to determine which anatomical structure your fNIRS channels are sensitive to is to use a photon migration simulation. There are several tools available for this, and it's usually done when designing your experiment/montage. The tool I tend to use is the FOLD toolbox, and there are convenience functions in MNE-NIRS to read the fold data and to determine which structures the standard positions are sensitive to, along with the sensitivity to each structure: https://mne.tools/mne-nirs/generated/mne_nirs.io.fold_channel_specificity.html#mne_nirs.io.fold_channel_specificity

Your suggestion would work with non-standard (10-05) positions, a large benefit. But I don't think the approach you propose would be easily accepted in an fNIRS journal, as there are established methods for doing this. Still, until we have our own photon migration integration that would allow for non-standard 10-05 optode positions (mne-tools/mne-nirs#405), your suggestion would be practically useful. But I think it's better suited to MNE-Python than MNE-NIRS.
Nice, I'll try that instead! We can reopen this later if someone is interested. Just for posterity, this is code I wrote to approximately accomplish this task:
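(The original snippet was not preserved in this scrape. A minimal sketch of the `cdist`-to-nearest-vertex idea described in the issue might look like the following; it uses synthetic positions and label names, whereas in practice the vertex positions would come from the `fsaverage` pial surface and the per-vertex labels from `mne.read_labels_from_annot`.)

```python
import numpy as np
from scipy.spatial.distance import cdist

def nearest_label(ch_pos, vert_pos, vert_labels):
    """Assign each channel the label of its nearest surface vertex.

    ch_pos : (n_channels, 3) array of sensor positions
    vert_pos : (n_vertices, 3) array of surface vertex positions
    vert_labels : (n_vertices,) array of label names, one per vertex
    """
    d = cdist(ch_pos, vert_pos)      # (n_channels, n_vertices) distances
    nearest = np.argmin(d, axis=1)   # index of the closest vertex per channel
    return [vert_labels[v] for v in nearest]

# Toy example with two vertices and two channels (positions are made up)
verts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
labels = np.array(["precentral-lh", "postcentral-lh"])
chs = np.array([[1.0, 0.0, 0.0], [9.0, 1.0, 0.0]])
print(nearest_label(chs, verts, labels))  # ['precentral-lh', 'postcentral-lh']
```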
The fold data is precomputed for the standard 10-20 etc. positions only. So I think we will be revisiting this discussion soon 😉
Another thing I'm having to do multiple times in fNIRS analyses is say which anatomical region a channel corresponds to, basically a label-mapping analogous to `stc_near_sensors`.

The easiest solution is just to take each channel and do a `cdist`-like operation to the `pial` surface of `fsaverage` to say which region is closest. However, I like the idea better of taking a `stc_near_sensors` approach, where for the Evoked data we use `np.eye(n_chan)` to obtain a `(n_dense_vertices, n_channels)` array. Then the "easiest" solution just gets vertex numbers from `np.argmax(..., axis=0)` of this array when `mode='nearest'`, but using other modes like `sum` or `weighted` is more like a `np.argsort(..., axis=0)`-type operation, where you get a varying number of labels per channel, along with the weights within those labels.

That way, `labels_near_sensors` can:

- mirror `stc_near_sensors` very closely, with the addition of a `labels` argument (which would typically be obtained from `read_labels_from_annot`, for example);
- reuse the same machinery `stc_near_sensors` does to project activation onto the cortical surface.

This is another one that could in theory go in MNE-NIRS (cc @rob-luke), but to me MNE-Python is probably a better scope, since `stc_near_sensors` was actually designed for ECoG originally, and it's nice to have `labels_near_sensors` mirror `stc_near_sensors` in terms of namespace. To avoid crowding the `mne.` namespace where `stc_near_sensors` lives, we could have this new function at `mne.label.labels_near_sensors`.