
MRG, ENH: Volume rendering #8064

Merged 26 commits on Aug 5, 2020

Conversation

@larsoner (Member) commented Jul 27, 2020

Closes #7648

  • Vector initialization
  • Vector updates with time-clicking and clims
  • Volume initialization (should be unified / not rely on vector-ness, just happen in set_time_point like everything else)
  • Volume updates (moving scalar setting to set_time_point should fix this)
  • Volume updates with time-clicking
  • Actually make it look good
  • Make it work for non-vector stc.plot
  • Volume-only plotting
  • Fix bug with pos_lims for volumetric plotting
  • Fix bug with volumetric plotting on macOS + VTK9 (BUG: OSX volume rendering broken with VTK9 + MultiSamples pyvista/pyvista#850)
  • Add colorbar for volume-only (needs appropriate actor, which we don't have for vol -- maybe fake one?)
  • Add terrain interaction
  • Volume updates with clims (currently requires changing the time point for some reason)
  • Fix bugs with multi-views and hemi='split' with vectors, actors, etc.
  • Deal with offset / surface issues, esp. when hemi='split' and/or hemi='lh' / 'rh' only
  • Add GFP
  • Add picking of max point from the volumes like we have for left/right hemi already
  • Clicking on volume: needs to be maximum along the ray
  • Clicking: needs to remove any spheres in ray path, if no spheres, then add new points
  • Add horizontal layout
  • Add QSplitter for configurable sizing
  • Add show_traces=<float> support to specify the fraction to dedicate to the time viewer
  • Fix actor transfer mapping for volume-only case (maybe use the color_tf directly?)
  • Divergent volume
  • Fix existing broken test
  • Sort out how to add actors to scenes (do we need different meshes? different actors? neither?)
  • Add tests for point picking of volumes and mixed
  • Add tests for divergent, view_layout, volume_options
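
The "maximum along the ray" picking item above can be sketched in a few lines of NumPy: sample the volume at evenly spaced points along the click ray and keep the brightest voxel. All names here are illustrative, not the actual _Brain implementation:

```python
import numpy as np

def pick_max_along_ray(volume, origin, direction, n_samples=100):
    """Return (value, index) of the brightest voxel along a ray."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)   # unit-length ray direction
    shape = np.asarray(volume.shape)
    best_val, best_idx = -np.inf, None
    for t in np.linspace(0., shape.max(), n_samples):
        idx = np.round(origin + t * direction).astype(int)
        if np.any(idx < 0) or np.any(idx >= shape):
            continue                          # sample fell outside the volume
        if volume[tuple(idx)] > best_val:
            best_val, best_idx = volume[tuple(idx)], tuple(idx)
    return best_val, best_idx

vol = np.zeros((10, 10, 10))
vol[5, 5, 7] = 3.0                            # single bright voxel
val, idx = pick_max_along_ray(vol, origin=(5, 5, 0), direction=(0, 0, 1))
print(val, idx)                               # 3.0 (5, 5, 7)
```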

@GuillaumeFavelier I think I have this at a point where I've worked out the transformations and data injection:

[Screenshot from 2020-07-27 17-02-27]

If you save the inverse operator from the modified plot_mixed_source_space_inverse.py with mne.minimum_norm.write_inverse_operator('mixed-inv.fif', inverse_operator), you can iterate with this much quicker script:

import os.path as op
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse

# Set dir
data_path = mne.datasets.sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
fname_evoked = op.join(data_dir, 'sample_audvis-ave.fif')
fname_cov = op.join(data_dir, 'sample_audvis-shrunk-cov.fif')
fname_trans = op.join(data_dir, 'sample_audvis_raw-trans.fif')
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
                          baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
inv = read_inverse_operator('mixed-inv.fif')
stc_vec = apply_inverse(evoked, inv, pick_ori='vector')
with mne.viz.use_3d_backend('pyvista'):
    stc_vec.plot(views='rostral', hemi='both', src=inv['src'],
                 trans=fname_trans, initial_time=0.180,
                 subjects_dir=subjects_dir)

Can you take over and fit this into the _Brain + PyVista framework better? I have a checklist at the top for how I think it should go, but feel free to update it, make changes, and push commits as you want.

@larsoner (Member, Author)

Actually @GuillaumeFavelier, if you want a more realistic image, use an oct6 surface and a spacing of 7. for the volume; these are more like what people would actually use. The example only sets the spacing higher for speed.

@GuillaumeFavelier (Contributor)

I would be glad to help with this. I'm not sure what you mean by "oct6 surface", though. Also, we have a plot_volume_source_estimates() function; is there a way to reuse that in this case, or is it not relevant?

@larsoner (Member, Author)

Not relevant; it's an orthoview / slice viewer in matplotlib. We could do something similar in VTK as well, but it would be an entirely different function and probably only for volumes.

Oct6 refers to the spacing argument of setup_source_space; it determines how densely the source mesh is sampled.
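
For reference, the vertex counts behind these spacing strings follow a simple formula: a recursively subdivided octahedron has 4**n + 2 vertices per hemisphere and a subdivided icosahedron has 10 * 4**n + 2, which is why 'oct6' yields 4098 source candidates per hemisphere and 'ico5' yields 10242. A quick sketch (illustrative helper, not MNE code):

```python
def n_sources_per_hemi(spacing):
    """Vertex count per hemisphere for MNE-style 'octN' / 'icoN' strings."""
    solid, n = spacing[:3], int(spacing[3:])
    if solid == 'oct':
        return 4 ** n + 2          # recursively subdivided octahedron
    elif solid == 'ico':
        return 10 * 4 ** n + 2     # recursively subdivided icosahedron
    raise ValueError(f'unknown spacing {spacing!r}')

print(n_sources_per_hemi('oct6'))  # 4098
print(n_sources_per_hemi('ico5'))  # 10242
```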

@agramfort (Member) commented Jul 28, 2020 via email

@larsoner (Member, Author)

> I just tried and personally I find it hard to see where the volume rendered activity maps to in the brain.

Indeed it looks bad currently, but I haven't tried at all to make it look good. Be patient :)

FWIW MIP (maximum intensity projection) mapping is commonly used -- even in stc_vol.plot() -- and it's useful, so I think it's likely we can come up with something that's helpful here.
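
As a toy illustration of what MIP does, assuming nothing about the actual VTK mapper: each pixel of the projection keeps only the brightest voxel along its viewing ray (here, simply an axis of the array):

```python
import numpy as np

def mip(volume, axis=2):
    """Maximum intensity projection: collapse one axis to its maximum."""
    return np.asarray(volume).max(axis=axis)

vol = np.zeros((4, 4, 4))
vol[1, 2, 0] = 1.0
vol[1, 2, 3] = 5.0        # deeper but brighter voxel on the same ray
proj = mip(vol)           # project along z
print(proj[1, 2])         # 5.0 -- the brightest voxel wins, regardless of depth
```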

@agramfort (Member) commented Jul 28, 2020 via email

@larsoner (Member, Author)

Pushed a commit to enable maximum intensity projection; now I find it pretty easy to see where the activity comes from.

@larsoner (Member, Author)

(still a lot of work to do to make it prettier and work better, but it should at least give you some hope for the future instead of despair :) )

@larsoner (Member, Author)

Okay, I find it much more usable / understandable now. Feel free to try again @agramfort. Updated example here, with an updated image (looks better with pos=7. or smaller; this uses 10.):

@larsoner (Member, Author)

Updated the existing example:

And added a volume-only mode, which is now used in the LCMV example (cc @britta-wstnr):

@agramfort (Member) commented Jul 29, 2020 via email

@larsoner (Member, Author)

Example 1:

Example 2:

@drammock (Member)

😍

@larsoner (Member, Author)

Fixed two-sided volumes/data:

I think I just need to fix the failing test (we shouldn't add lh spheres to rh-only views and vice versa) and then add tests to cover these new lines.

The sticking point will be figuring out if we can re-use actors and meshes in different renderers. @GuillaumeFavelier any ideas? Do we need to use vtkAssembly or something? I'm tempted just to try adding actors to multiple renderers/views to see if it works.

@GuillaumeFavelier (Contributor)

> The sticking point will be figuring out if we can re-use actors and meshes in different renderers

In my experiments, just adding an existing vtkActor to a different vtkRenderer of the same vtkRenderWindow works, but from my research in #8018 this could lead to unexpected behaviours (maybe inconsistencies while removing or modifying?), and the recommended plan of action is to let each renderer manage its own list of actors and mappers.

I would say that the safe route is to share the mesh itself.

AFAIK, vtkAssembly would help manage the properties of those actors but not solve the duplication of the underlying data.
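
The "share the mesh itself" recommendation can be illustrated with a plain-Python sketch (hypothetical class names, not real VTK): each renderer manages its own actor list, but every actor references the same underlying mesh object, so a single geometry update shows up in every view:

```python
class SharedMesh:
    """Stands in for the shared dataset (e.g. a vtkPolyData)."""
    def __init__(self, points):
        self.points = points

class Actor:
    """Per-renderer actor; holds a reference to the mesh, not a copy."""
    def __init__(self, mesh):
        self.mesh = mesh

class Renderer:
    """Each renderer manages its own list of actors."""
    def __init__(self):
        self.actors = []
    def add_mesh(self, mesh):
        self.actors.append(Actor(mesh))

mesh = SharedMesh(points=[(0., 0., 0.), (1., 0., 0.)])
views = [Renderer(), Renderer()]
for view in views:
    view.add_mesh(mesh)           # one actor per renderer, same underlying data

mesh.points.append((0., 1., 0.))  # update the shared mesh once...
print([len(v.actors[0].mesh.points) for v in views])  # [3, 3] ...seen in both views
```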

@larsoner added this to the 0.21 milestone Jul 31, 2020
@GuillaumeFavelier (Contributor)

Yes, let's do this 👍

@larsoner (Member, Author)

I'm seeing segfaults on a lot of CI runs but not locally. Can you replicate?

@larsoner (Member, Author) commented Aug 1, 2020

Segfaults replicated -- it was from accidentally calling _TimeViewer(brain) twice. Added a check to ensure it won't happen again; hopefully that fixes the segfaults along the way...

@agramfort (Member)

I would like to test myself before we merge. Can you tell me when you have won against the CIs?

@larsoner (Member, Author) commented Aug 1, 2020

Ultraslow run failure was spurious, restarted. Should be ready for you @agramfort

@larsoner (Member, Author) commented Aug 1, 2020

(codecov just complains that the patch diff coverage is only 93% while the target is 95%, which I think is okay. But I might push another commit just to see if I can touch a few more of the lines.)

@GuillaumeFavelier (Contributor) left a comment

LGTM. I will move the "naked VTK" part of add_volume_object into the pyvista backend in another PR.

@larsoner mentioned this pull request Aug 3, 2020
@agramfort (Member) left a comment

I don't have time to look in detail, but it works great on my machine.

@GuillaumeFavelier and @drammock I'll let you comment and merge if all good for you.

@larsoner (Member, Author) commented Aug 5, 2020

@GuillaumeFavelier already gave a +1 for merging, so I'll go ahead and merge.

Successfully merging this pull request may close these issues.

ENH: Add volumetric rendering to _Brain
4 participants