
[ENH] Add tutorial on time-frequency source estimation with STC viewer GUI #10920 #11352

Closed
wants to merge 129 commits

Conversation

alexrockhill
Contributor

Follow-up to #10920, which was accidentally merged.

@alexrockhill
Contributor Author

alexrockhill commented Dec 2, 2022

Ok, this is good to go by me. Sorry I accidentally merged the original PR; please review/merge when you're ready, @agramfort and @britta-wstnr, no rush.

Also tagging @larsoner

@britta-wstnr
Member

Sorry it took so long, @alexrockhill, but I now finally had the chance to take a look at the GUI.
Thanks for posting the MWE in the other thread, that was extremely helpful!
I like the drop-down menu for the epochs/average/ITC.

Here are my points and some nitty-gritty details:

  1. When the GUI opens on my machine (I am on macOS Ventura), the three orthogonal MRI slices are quite distorted (see below for a screenshot) and I have to resize.
  2. When I click on the TFR, it takes 2-3 seconds until the other views update. I think that is really long.
  3. When I click "Go to maxima", it also takes 2-3 seconds until something happens.
  4. Changing the numbers in tmin and tmax at the bottom leaves me with two blinking cursors in those boxes.
  5. It seems like I have to re-choose a baseline after changing the time values for it to take effect.
  6. No matter what baseline I try, the values never seem to get negative (and the colormap always stays black-yellow).
  7. The keyboard shortcuts mentioned in Help do not work for me.

(screenshot: distorted MRI slices)

@drammock
Member

I tried to test this today using the test script @alexrockhill provided here: #10920 (comment). Here are a few videos showing problems I encountered.

Scroll wheel on MRI slices causes weird shift

scroll-wheel.webm

cmap midpoint slider has no effect

midpoint-slider-does-nothing.webm

clicking the T/F plane moves crosshairs to wrong location

update after click is also slow, ~3 seconds on my (very fast) desktop machine, but you can't tell that in the video because it doesn't visualize when the clicks happened.

click-target-missed.webm

@larsoner
Member

larsoner commented Jan 2, 2023

clicking the T/F plane moves crosshairs to wrong location

update after click is also slow, ~3 seconds on my (very fast) desktop machine, but you can't tell that in the video because it doesn't visualize when the clicks happened.

@alexrockhill have you line profiled the code to see where the slowdowns are during interactive updates?

@alexrockhill
Contributor Author

@alexrockhill have you line profiled the code to see where the slowdowns are during interactive updates?

I'll try that next when I have a minute!
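For reference, a minimal sketch of how one might locate slow calls during interactive updates using the standard library's cProfile (the `update_views` function here is a hypothetical stand-in for the GUI's redraw callback, not MNE-Python code; a line-by-line tool like `line_profiler` would give finer granularity):

```python
import cProfile
import io
import pstats


def update_views():
    # hypothetical stand-in for the GUI's slice/TFR redraw work
    return sum(i * i for i in range(100_000))


profiler = cProfile.Profile()
profiler.enable()
update_views()
profiler.disable()

# sort by cumulative time and print the 10 most expensive calls
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)
print(stream.getvalue())
```

The printed table shows per-function call counts and cumulative times, which is usually enough to spot which update step dominates the ~3 seconds.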

@alexrockhill
Contributor Author

So I'm fixing all the other things, but with regard to @drammock's comment about the weird shift when you zoom: that's a feature, not a bug. I really dislike that freeview doesn't do this re-centering when you zoom; it can be really difficult to see the voxel you have selected, and you end up spending all your time just trying to get the image centered. So I'd prefer to re-center when you zoom so that the cursor is always in view.

@alexrockhill
Contributor Author

alexrockhill commented Jan 6, 2023

This demonstrates the issue:

Screencast.from.01-06-2023.10.18.26.AM.webm

@drammock
Member

drammock commented Jan 6, 2023

about the weird shift when you zoom, that's a feature not a bug. [...] I really dislike in freeview when you zoom, it can be really difficult to see the voxel you have selected

OK, but there are smoother ways to solve that than a single shift of the image on the first zoom interaction. Play around with Google Maps a bit: the map doesn't shift dramatically on the first turn of the scroll wheel to move the point under your mouse cursor to the center of the window. Instead, the thing that was right under your cursor stays right under your cursor (the zooming is centered where the cursor is).

If the goal is to keep the selected voxel / crosshairs in view, then implement a "zoom to cursor" type interaction but pretend that the cursor is at the crosshairs regardless of where it actually is. To me that is better than what you have now, but less natural than the proposal in the following paragraph:

Possibly even better, I think, would be zooming to where the cursor actually is for the slice that is under the mouse pointer, and zooming the other slices in a way that keeps the crosshairs in sync in terms of their x/y positions within each slice window. So in the PI and LI windows the horizontal crosshair would always be aligned, and in the LI and LP windows the vertical crosshair would always be aligned. This gets to one of my peeves about freeview: if the transverse/horizontal view is nose-up, then it should not be in the lower-left corner (below sagittal) but rather in the lower-right (below coronal). If arranged that way (basically switching the lower two quadrants with each other), the crosshair alignment makes a lot more sense.

EDIT: this does allow the crosshairs to go out of view if the user's cursor is not on/near the crosshairs when they spin the scroll wheel. But it allows both zooming on the selected voxel and zooming on some other location that you happen to want to inspect (without having to change the voxel selection first).
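The "zoom centered on the cursor" behavior described above can be sketched in a few lines; this is an illustrative reconstruction, not code from the PR:

```python
def zoom_about_point(xlim, cursor, factor):
    """Return new (lo, hi) axis limits zoomed by `factor` about `cursor`.

    The data point under the cursor keeps the same fractional position
    along the axis, so it stays under the mouse after the zoom.
    """
    lo, hi = xlim
    frac = (cursor - lo) / (hi - lo)  # cursor's fractional position
    new_width = (hi - lo) / factor
    new_lo = cursor - frac * new_width
    return new_lo, new_lo + new_width


# zoom 2x about x=30 on an axis spanning 0..100
lo, hi = zoom_about_point((0.0, 100.0), 30.0, 2.0)
print(lo, hi)  # 15.0 65.0 -- cursor at 30 is still 30% along the axis
```

The same function applied per-axis, with the cursor coordinate shared across the linked slice views, would keep the crosshairs aligned as described.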

@alexrockhill
Contributor Author

alexrockhill commented Jan 9, 2023

That all sounds very reasonable @drammock, but it will apply to this as well as the iEEG locate GUI, so I think that would be best for a follow-up PR. Let's get all the things that are bugs fixed, then merge, and then we can iterate on improving features. I'm tempted to say the same goes for speed, but I'm giving that a bit more of a look now. Yes, ideally it wouldn't take 3 seconds to move the cursor around, but the rendering takes almost a second and there isn't much improvement to be had on the 1.92 seconds the np.take calls take; I think it's just the case that if you pass the full 5D data, you'll have to be patient. You can pass just the average (that's supported too) and things will be a lot faster... EDIT: We're talking about a GUI in Python here with large data; this would have to be written in C++ if we really wanted things to be fast, but a slow GUI with options to pass smaller data and have it go faster is better than no GUI. I think getting this merged and getting feedback from a wider audience than core devs would be really helpful too.
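To illustrate the point about passing only the average (the shapes here are made up for illustration): indexing the full per-epoch 5D array touches many times more memory than the epoch-averaged equivalent, which is one reason the averaged data updates faster.

```python
import numpy as np

# hypothetical (epochs, vertices, orientations, freqs, times) array
full = np.zeros((30, 200, 3, 9, 50))
avg = full.mean(axis=0)  # average over epochs -> 4D

# selecting one time index touches 30x more data in the full array
slice_full = np.take(full, 10, axis=-1)
slice_avg = np.take(avg, 10, axis=-1)
print(slice_full.size, slice_avg.size)  # 162000 5400
```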

@alexrockhill
Contributor Author

alexrockhill commented Jan 9, 2023

So the middle color scale only applies to the 3D rendering. On mne.viz.Brain it appears that all three are broken for some reason on this data, so I was going to copy from there, but I think it's best to just wait until that is fixed and improve the color scale in future PRs.

Screencast.from.01-09-2023.12.12.33.PM.webm

@alexrockhill
Contributor Author

Also, the blinking cursors look like a very annoying PyQt bug: even when you remove focus, the cursor still continues to blink. I tried the solution here (https://stackoverflow.com/questions/42475602/pyqt5-two-blinking-cursors-at-the-same-time), but once that cursor starts blinking, I can't figure out a way to stop it. I think it's best just to wait for a new PyQt version to fix that...

@alexrockhill
Contributor Author

Ok, I sped it up significantly, I fixed the spectrogram selection issue (I'm pretty sure), I fixed the things pointed out as wrong with the baseline correction (that was a bit sloppy, sorry, it was added last minute), and I fixed the keyboard shortcut issue, which was caused by the focus being set wrong.

I didn't implement a better zoom, and I only implemented vmid for the volume (it's not very straightforward to add in matplotlib's imshow, which only accepts a vmin and vmax for the colormap). I might look into the latter before this is merged, but it would also be fine as a follow-up PR.

@britta-wstnr
Member

Also, re the interpolation, yes I believe that is expected, as values neighboring NaNs mess up the interpolation. I think I should fix all the NaNs by mapping divide-by-zero to zero though, so that would be fixed indirectly.

Where are the NaNs coming from? The masking of the data via min/mid/max? If so, should the interpolation not be done before the masking? The more I think about it, the less intuitive I find it - after all, in the example above, it masks out the extreme value. But I think you said you fixed that now in one of your other comments?

I am not sure I follow on the baseline - does this have to do with writing custom code (you mention that for the colorbar) - i.e. is the baseline operations underpinned by custom code as well?

As for the symmetric colorbar - for me it is not symmetric (also check the figures above - they show a less extreme case though).

There are quite a few spelling mistakes etc. in the latest code additions that GitHub catches - so please have a look at that and maybe check locally with a linter.

@alexrockhill
Contributor Author

Where are the NaNs coming from? The masking of the data via min/mid/max? If so, should the interpolation not be done before the masking? The more I think about it, the less intuitive I find it - after all, in the example above, it masks out the extreme value. But I think you said you fixed that now in one of your other comments?

The NaNs come from small float values being mapped to zero; when you then go to take the log of that, you get NaN. The interpolation does happen after the masking, though, because the interpolation is done in matplotlib; to do it the other way around, the mask would have to be passed as a parameter to be applied afterwards, which doesn't exist in matplotlib currently.
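A minimal reconstruction (not the GUI code) of how values that underflow to zero produce non-finite results in a log-ratio baseline, and the kind of sanitizing fix described above:

```python
import numpy as np

# toy power values: the first two underflowed to zero
power = np.array([0.0, 0.0, 1e-3, 1.0])
baseline = np.array([0.0, 1.0, 1e-3, 1.0])

with np.errstate(divide="ignore", invalid="ignore"):
    logratio = np.log10(power / baseline)  # 0/0 -> nan, log10(0) -> -inf

print(logratio)  # [nan, -inf, 0., 0.]

# map non-finite results to zero so interpolation is not poisoned
logratio = np.where(np.isfinite(logratio), logratio, 0.0)
print(logratio)  # [0., 0., 0., 0.]
```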

I am not sure I follow on the baseline - does this have to do with writing custom code (you mention that for the colorbar) - i.e. is the baseline operations underpinned by custom code as well?

The baseline uses the MNE internal rescale function so it does not duplicate code. The issue with that was the same log-of-an-integer-rounded-to-zero one.

As for the symmetric colorbar - for me it is not (also check the figures above - shows a less extreme case though).

So there are two issues there: 1) the gradiometers must be combined, and that uses root mean square, so the result is strictly positive even after baseline correction makes the raw values negative, and 2) the colorbar is reset to min 0, middle halfway, and max all-the-way whenever you change the baseline selector. 2) is what I implemented when a symmetric colorbar was requested. For non-gradiometers, this does exactly what you're asking for; there's just not an easy way to do it for gradiometers because they must be combined. We could combine them with mean rather than RMS, but that involves an API change to plot_topomap and, in my opinion, we should do that in a separate PR to deal with the implications and standardization across plot_topomap.
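A small demonstration of point 1): RMS-combining two planar gradiometer channels can never produce negative values, while a mean combination preserves sign (toy values, not real data):

```python
import numpy as np

# baseline-corrected values for a hypothetical gradiometer pair
grad1 = np.array([-2.0, -1.0, 0.5])
grad2 = np.array([-1.0, 2.0, -0.5])

rms = np.sqrt((grad1 ** 2 + grad2 ** 2) / 2)  # always >= 0
mean = (grad1 + grad2) / 2                    # keeps the sign

print(rms)   # strictly non-negative, so the colorbar can't go below 0
print(mean)  # first element is negative
```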

There is quite some spelling mistakes etc in the latest code additions that Github catches - so please have a look at that and maybe check locally with a linter.

Those aren't mine, they came from changing the codespell testing code and I fixed them in another PR.

@alexrockhill
Contributor Author

To demonstrate the symmetric topomap, here is a script that uses magnetometers; they don't have the colorbar-asymmetry issue because they don't have to be combined.

import numpy as np
import mne
from mne.datasets import somato
from mne.time_frequency import csd_tfr

data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = (data_path / 'sub-{}'.format(subject) / 'meg' /
             'sub-{}_task-{}_meg.fif'.format(subject, task))

# crop to 5 minutes to save memory
raw = mne.io.read_raw_fif(raw_fname).crop(0, 300)
raw.load_data()

picks = mne.pick_types(raw.info, meg='mag', eog=True, exclude='bads')

# read epochs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, picks=picks,
                    preload=True, decim=3)

# read forward operator and point to freesurfer subject directory
fname_fwd = (data_path / 'derivatives' / 'sub-{}'.format(subject) /
             'sub-{}_task-{}-fwd.fif'.format(subject, task))
subjects_dir = data_path / 'derivatives' / 'freesurfer' / 'subjects'

fwd = mne.read_forward_solution(fname_fwd)

rank = mne.compute_rank(epochs, tol=1e-6, tol_kind='relative')
active_win = (0.5, 1.5)
baseline_win = (-1, 0)

# compute cross-spectral density matrices
freqs = np.logspace(np.log10(12), np.log10(30), 9)

# time-frequency decomposition
epochs_tfr = mne.time_frequency.tfr_morlet(
    epochs, freqs=freqs, n_cycles=freqs / 2, return_itc=False,
    average=False, output='complex')
epochs_tfr.decimate(20)  # decimate for speed

csd = csd_tfr(epochs_tfr, tmin=-1, tmax=1.5)
baseline_csd = csd_tfr(epochs_tfr, tmin=baseline_win[0], tmax=baseline_win[1])
ers_csd = csd_tfr(epochs_tfr, tmin=active_win[0], tmax=active_win[1])

surface = subjects_dir / subject / 'bem' / 'inner_skull.surf'
vol_src = mne.setup_volume_source_space(
    subject=subject, subjects_dir=subjects_dir, surface=surface,
    pos=10, add_interpolator=False)  # just for speed!

conductivity = (0.3,)  # one layer for MEG
model = mne.make_bem_model(subject=subject, ico=3,  # just for speed
                           conductivity=conductivity,
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)

trans = fwd['info']['mri_head_t']
vol_fwd = mne.make_forward_solution(
    raw.info, trans=trans, src=vol_src, bem=bem, meg=True, eeg=True,
    mindist=5.0, n_jobs=1, verbose=True)

# Compute source estimate using MNE solver
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'MNE'  # use MNE method (could also be dSPM or sLORETA)

# make a different inverse operator for each frequency so as to properly
# whiten the sensor data
inverse_operator = list()
for freq_idx in range(epochs_tfr.freqs.size):
    # for each frequency, compute a separate covariance matrix
    baseline_cov = baseline_csd.get_data(index=freq_idx, as_cov=True)
    baseline_cov['data'] = baseline_cov['data'].real  # only normalize by real
    # then use that covariance matrix as normalization for the inverse
    # operator
    inverse_operator.append(mne.minimum_norm.make_inverse_operator(
        epochs.info, vol_fwd, baseline_cov))

# finally, compute the stcs for each epoch and frequency
stcs = mne.minimum_norm.apply_inverse_tfr_epochs(
    epochs_tfr, inverse_operator, lambda2, method=method,
    pick_ori='vector')

viewer = mne.gui.view_vol_stc(stcs, subject=subject, subjects_dir=subjects_dir,
                              src=vol_src, inst=epochs_tfr)
viewer.go_to_extreme()  # show the maximum intensity source vertex
viewer.set_cmap(vmin=0.25, vmid=0.8)
viewer.set_3d_view(azimuth=40, elevation=35, distance=300)

@alexrockhill
Contributor Author

Ok to merge?

@britta-wstnr
Member

Just checked the GUI again after your fixes, and for me the colorbar is still not symmetric (-2.5 to 4):
(screenshot: colorbar spanning -2.5 to 4)
(also note how the "Power" next to the colorbar is cut off.)

I also came across this again: here, using baseline zlogratio creates another one of the white TFRs at the extreme ... I understand your explanation of why this happens, but I also think it is not what a user of the GUI would expect. I would expect the extrema of my data to be visible.
(screenshot: TFR with white region at the extreme)

@larsoner
Member

[@drammock, long ago] ... then we could release this in mne-incubator instead, and migrate it to MNE-Python after we see how it goes with the users.

[@alexrockhill] Ok to merge?

[@britta-wstnr] Just checked the GUI again after your fixes, and for me the colorbar is still not symmetric

@alexrockhill from a high-level perspective this PR does a lot of cool stuff. To do so it needs a lot of lines (+1700ish), and given the number of review iterations and the expected follow-up PRs to add new/experimental features (e.g., complex-valued phase alignment), I'm a bit worried about the future review/maintenance bandwidth.

I propose we do a variant of what @drammock suggested earlier -- but instead of moving code to mne-incubator, we create a new mne-tfr-gui or mne-tfr-browser repo/package like we have for mne-qt-browser.

I think having the mne-qt-browser repo has worked well for people interested in committing and testing the raw data browsing, and it seems like the TFR browsing/GUI might work similarly. (Incidentally, I think this is also similar to how "nutmegtrip"'s TFR browser was implemented for FieldTrip!) You could be the primary author/maintainer of this repo, and it would allow you to move faster with more potentially experimental features. If we go this route, I'm happy to help set up CIs, the repo, PyPI and conda-forge, etc., so the code transfer overhead won't be too bad.

To be honest, I wish we had done something similar for mne.viz.Brain a few years ago, as I think it would have made our lives much easier in a lot of ways (and gotten us more contributors for stuff like #11503!)...

@alexrockhill
Contributor Author

@britta-wstnr, the colorbar is symmetric in the sense that vmin and vmax are at the extremes and vmid is exactly in the middle. It's not hard to make it symmetric so that vmin and vmax have the same absolute value; it just wasn't clear that's what you were asking for.

For the second one, that is a bug; if you toggle interpolate on and off, it gets fixed. I forgot I needed to look into that, I'll fix it asap.

@larsoner, that sounds like a pretty good idea; my only hesitancy is that the core code is shared with the iEEG GUI. With the sEEG circle visualization shared with MNE-Connectivity, that was really messy and more than a bit of a pain. We could move both GUIs over to a new repo though. What do you think?

@larsoner
Member

We could move both guis over to a new repo though. What do you think?

Sure, if that makes logistics easier then that's fine. Early next week I can set up the repo and such (unless you're keen to learn/try!), and we can move iEEG first, get green, then move this. WDYT?

@alexrockhill
Contributor Author

Sounds good

@alexrockhill
Contributor Author

Now the colorbar is symmetric the right way and updates properly for the topomap.

The way I was handling integers for the complex conjugate used squares and so would overflow, causing a lot of values to be cast to zero; I fixed that as well. What was needed was temporarily casting to the next biggest data type, which I now think is unavoidable. That will cause a bump in the maximum amount of memory used, but since it worked for a decent number of epochs on multiple pretty wimpy laptops I've tested, I think it's not too bad and will work fine.
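A reconstruction of the overflow described above (dtypes and values are illustrative): squaring small-integer data wraps around silently in NumPy, and upcasting before the multiply avoids it at the cost of temporarily more memory.

```python
import numpy as np

x = np.array([2000, 300], dtype=np.int16)

# 2000**2 = 4_000_000 does not fit in int16, so the result wraps
overflowed = x * x

# temporarily cast to the next bigger dtype before squaring
safe = x.astype(np.int32) * x.astype(np.int32)

print(overflowed)  # wrapped (wrong) values
print(safe)        # [4000000, 90000]
```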

@agramfort
Member

agramfort commented Mar 12, 2023 via email

@britta-wstnr
Member

+1 for putting this in its own repo.

As for the colorbar - for good measure:

when you change the baseline, the colorbar is not symmetric - and you have to navigate yourself to a point where 0 maps to white. Why not make this automatic? Did we even have that in a previous version?

I also wonder if it would make sense to force the colorbar to be symmetric when a baseline is computed on the fly - so that 0 is always white.

@alexrockhill
Contributor Author

when you change the baseline, the colorbar is not symmetric - and you have to navigate yourself to a point where 0 maps to white. Why not make this automatic, did we have that in a previous version even?

I also wonder if it would make sense to force the colorbar to be symmetric when a baseline is computed on the fly - so that 0 is always white.

That's the current behavior as far as I can tell, except when combining planar gradiometers; in that case, the values can't be negative using root-mean-square, so white is set to the midpoint value. Can you provide a screenshot if there's still something that's an issue?

@britta-wstnr
Member

That's the current behavior as far as I can tell.

Yes, the latest iteration does it correctly.

@larsoner
Member

@alexrockhill FYI I haven't forgotten about this, hope to get to the new repo stuff in the next couple of days!

@larsoner
Member

@alexrockhill can you open a WIP PR to add this code to https://github.com/mne-tools/mne-gui-addons ? Then I can fix all the infrastructure stuff (unless you want to!) to get things green. Once that's merged, we can modify this PR to just contain the tutorial that uses the GUI, I think. Maybe eventually mne-gui-addons should have its own doc build and such, but for now it's easy enough (and good for exposure) to have the tutorial using it live here.

@alexrockhill
Contributor Author

@alexrockhill can you open a WIP PR to add this code to https://github.com/mne-tools/mne-gui-addons ? Then I can fix all the infrastructure stuff (unless you want to!) to get things green. Once that's merged, we can modify this PR to just contain the tutorial that uses the GUI, I think. Maybe eventually mne-gui-addons should have its own doc build and such, but for now it's easy enough (and good for exposure) to have the tutorial using it live here.

Definitely, thanks for doing that!

@larsoner
Member

larsoner commented Apr 3, 2023

Closing in favor of mne-tools/mne-gui-addons#5

@larsoner larsoner closed this Apr 3, 2023