MRG, ENH: Volume rendering #8064
Conversation
Actually @GuillaumeFavelier, if you want a more realistic image, use an `oct6` surface and `spacing=7.` volume; these are more like what people would actually use. The example sets the spacing higher just for speed.
I would be glad to help with this. I'm not sure what you mean by "oct6 surface" though. Also we have a …
Not relevant; it's an orthoview / slice viewer in matplotlib. We could do something like this in VTK as well, but it would be an entirely different function and probably only for volumes. `oct6` refers to the `spacing` argument of `setup_source_space`; it determines how densely the mesh is sampled.
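As background for the `oct6` / `ico5` naming: these spacings come from recursive subdivision of an octahedron or icosahedron, so the per-hemisphere vertex counts follow a simple formula (4**n + 2 for `octN`, 10 * 4**n + 2 for `icoN`). A minimal sketch of that arithmetic (the helper name is illustrative, not an MNE function):

```python
def n_sources_per_hemi(spacing):
    """Per-hemisphere source count for an 'octN' or 'icoN' spacing
    (recursive subdivision of an octahedron / icosahedron)."""
    kind, n = spacing[:3], int(spacing[3:])
    if kind == "oct":
        return 4 ** n + 2        # octahedron subdivision
    if kind == "ico":
        return 10 * 4 ** n + 2   # icosahedron subdivision
    raise ValueError(f"unknown spacing {spacing!r}")

print(n_sources_per_hemi("oct6"))  # 4098
print(n_sources_per_hemi("ico5"))  # 10242
```

So `oct6` gives a reasonably dense surface (4098 sources per hemisphere) without the cost of the `ico5` default.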
I just tried, and personally I find it hard to see where the volume-rendered activity maps to in the brain. The orthoview is much easier for seeing where things happen, and it can make publication-ready figures, while here I would say no as it is.
grumpy Alex
Indeed it looks bad currently, but I haven't tried at all to make it look good. Be patient :) FWIW, MIP (maximum intensity projection) mapping is commonly used (even in `stc_vol.plot()`) and it's useful, so I think it's likely we can come up with something that's helpful here.
please prove me wrong :)
Pushed a commit to enable maximum intensity projection; now I find it pretty easy to see where the activity comes from.
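For reference, maximum intensity projection just keeps the brightest voxel along each viewing ray, which is why a single deep activation still shows up on the projected image. A minimal NumPy sketch of the idea (a toy volume, not the VTK renderer used in this PR):

```python
import numpy as np

# Toy 3D "activation" volume with one bright voxel buried inside.
vol = np.zeros((4, 4, 4))
vol[2, 1, 3] = 5.0

# MIP along the viewing axis: each output pixel keeps the maximum
# value encountered along that ray through the volume.
mip = vol.max(axis=0)

print(mip.shape)  # (4, 4)
print(mip[1, 3])  # 5.0
```

Because only the maximum survives, MIP trades depth information for visibility, which matches the discussion above: activity is easy to spot, while localizing it in depth still benefits from the orthoview.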
(still a lot of work to do to make it prettier and work better, but it should at least give you some hope for the future instead of despair :) )
Okay, I find it much more usable / understandable now. Feel free to try again, @agramfort. Updated example here, with updated image (looks better with `pos=7.` or smaller; this uses 10.):
Updated the existing example, added a volume-only mode, and now use it in the LCMV example (cc @britta-wstnr):
ok it does look nice... ;)
😍
Fixed two-sided volumes/data. I think I just need to add some tests, fix the failing test (shouldn't add lh spheres to rh-only views and vice versa), and then add tests to cover these new lines. The sticking point will be figuring out whether we can re-use actors and meshes in different renderers. @GuillaumeFavelier, any ideas? Do we need to use `vtkAssembly` or something? I'm tempted to just try adding actors to multiple renderers/views to see if it works.
In my experiments, just adding an existing … I would say that the safe route is to share the mesh itself. AFAIK, …
Yes, let's do this 👍
I'm seeing segfaults on a lot of CI runs but not locally. Can you replicate?
Segfaults replicated -- it was from accidentally …
I would like to test it myself before we merge. Can you tell me when you have won against the CIs?
(codecov just complains that the patch diff is only 93% and the target is 95%, which I think is okay. But I might push another commit just to see if I can touch a few more of the lines)
LGTM. I will move the "naked VTK" part of `add_volume_object` in the pyvista backend in another PR.
I don't have time to look at the details, but it works great on my machine.
@GuillaumeFavelier and @drammock, I'll let you comment and merge if all is good for you.
@GuillaumeFavelier already gave a +1 for merging, so I'll go ahead and merge.
Closes #7648

To do:

- `set_time_point` (like everything else)
- `stc.plot` (`set_time_point` should fix this)
- `hemi='split'` with vectors, actors, etc.
- `offset` / `surface` issues, esp. when `hemi='split'` and/or `hemi='lh'` / `'rh'` only
- `show_traces=<float>` support to specify the fraction to dedicate to the time viewer
- `color_tf` (directly?)

@GuillaumeFavelier I think I have this at a point where I've worked out the transformations and data injection:
If you `mne.minimum_norm.write_inverse_operator('mixed-inv.fif', inverse_operator)` the inverse operator from the modified `plot_mixed_source_space_inverse.py`, you can iterate with this much quicker script. Can you take over and fit this into the `_Brain` + PyVista framework better? I have a checklist at the top for how I think it should go, but feel free to update, change, and push commits as you want.