Region-based formulation of the task #773
hermancollin started this conversation in Ideas of new features
This comes to mind because of a recent conversation with a collaborator. For context, they have images with "loose" myelin where the myelin masks sometimes contain holes.
Our current models predict the same information twice: the boundary between an axon and its myelin sheath appears in both the axon and myelin masks. As discussed above, this does not affect morphometrics much, but it does affect the flow of information during training and over-complicates the task.
For nnUNet, this is not directly an issue because all annotations are given in a single flattened mask (i.e. background = 0, myelin = 1, axon = 2), so during training and at inference time there is only one border between the two classes. However, we should still try region-based training (see this post) and compare results.
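To make the region-based idea concrete, here is a minimal numpy sketch of turning a flattened label mask into two overlapping region targets (whole fiber = axon ∪ myelin, and axon). The function name and the toy array are illustrative, not from the codebase; only the label values (background = 0, myelin = 1, axon = 2) come from the description above.

```python
import numpy as np

def flattened_to_region_targets(mask: np.ndarray) -> np.ndarray:
    """Convert a flattened label mask (background=0, myelin=1, axon=2)
    into two region targets in the region-based style: each channel is a
    union of labels, and the channels are allowed to overlap."""
    whole_fiber = mask > 0   # axon OR myelin
    axon = mask == 2         # axon only; a subset of whole_fiber
    return np.stack([whole_fiber, axon]).astype(np.uint8)

# toy example: a single fiber with a myelin ring around an axon pixel
toy = np.array([
    [0, 1, 1, 1, 0],
    [1, 1, 2, 1, 1],
    [0, 1, 1, 1, 0],
])
regions = flattened_to_region_targets(toy)
```

With targets like these there is no axon/myelin border inside the supervision signal at all: the only boundaries the model must learn are background-to-fiber and fiber-to-axon.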
For SAM, the way I'm currently working does suffer from this issue: the axon and myelin masks both contain this boundary. What I'm realizing is that this overcomplication of the task has repercussions on the final model. I should simply output two region masks: the axon mask would be unchanged, but the "myelin" mask would become a "whole fiber" mask. The myelin mask could then be recovered in postprocessing by subtracting the axon mask from the nerve fiber mask.
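The postprocessing step is a one-liner; a sketch with hypothetical predicted masks (the arrays below are made up for illustration):

```python
import numpy as np

# hypothetical predictions: a whole-fiber mask and an axon mask
fiber = np.array([
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=np.uint8)
axon = np.zeros_like(fiber)
axon[1, 2] = 1

# myelin = fiber AND NOT axon; logical ops keep the result binary even
# if the axon prediction leaks slightly outside the fiber prediction,
# where plain arithmetic subtraction would underflow/go negative
myelin = np.logical_and(fiber.astype(bool), ~axon.astype(bool)).astype(np.uint8)
```

A side benefit for the "loose" myelin case: holes in the myelin that sit between the axon and the outer fiber boundary are no longer a labeling headache, since the two region masks only encode the outer fiber contour and the axon contour.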