Multi-label segmentation by segmentation model #1726
Replies: 3 comments 6 replies
-
In your screenshot above the image spacing is incorrect. Did you resample all the input images to have the same resolution? You can also improve segmentation quality by segmenting more structures. You don't need to do this manually: use pretrained models, such as the whole-head and upper-body segmentation models in 3D Slicer's MONAIAuto3DSeg and TotalSegmentator extensions. If you can release your segmentations publicly, then @diazandr3s may be able to help with training and distributing them via MONAI Auto3DSeg.
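A quick way to confirm whether all input images really share the same spacing is to compare the voxel spacings derived from each image's affine matrix. This is a minimal numpy-only sketch (the helper names are hypothetical; in practice you would read the affines with a library such as nibabel or SimpleITK):

```python
import numpy as np

def voxel_spacing(affine: np.ndarray) -> np.ndarray:
    """Voxel spacing in mm: the Euclidean norm of each spatial column
    of the 4x4 image-to-world affine."""
    return np.linalg.norm(affine[:3, :3], axis=0)

def spacings_consistent(affines, atol: float = 1e-3) -> bool:
    """True when every image's spacing matches the first, within atol."""
    spacings = np.stack([voxel_spacing(a) for a in affines])
    return bool(np.all(np.abs(spacings - spacings[0]) <= atol))
```

If this returns False for your dataset, resampling everything to a common spacing (e.g. with the Spacingd transform in the training pretransforms) is usually the first fix to try.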
-
Hi @saeidsh1370, this seems to be an issue with the indexes associated with each segment in the annotated dataset. Can you please expand on this?
How did you create these annotations?
Yes, it is feasible. I'd start with an ROI of 96x96x96.
Can you please comment on how you made sure the label indexes are consistent across the training set?
Which single region/segment did you use to verify this?
You could update the default pretransforms here: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/segmentation.py#L75-L95
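Before retraining, it is worth verifying that every annotation volume uses exactly the same label-index-to-organ mapping. A minimal numpy sketch of such a check (these helpers are hypothetical, not part of MONAI Label):

```python
import numpy as np

def label_summary(label_volume: np.ndarray) -> dict:
    """Map each label index found in one annotation volume to its voxel count."""
    ids, counts = np.unique(label_volume, return_counts=True)
    return dict(zip(ids.tolist(), counts.tolist()))

def consistent_indices(volumes) -> bool:
    """True when every volume contains exactly the same set of label indices."""
    index_sets = [set(label_summary(v)) for v in volumes]
    return all(s == index_sets[0] for s in index_sets)
```

Printing `label_summary` per training case also makes swapped indexes easy to spot: a "brain" index with a lung-sized voxel count is a strong hint that two structures were annotated under each other's labels.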
-
Thanks, @lassoan, for your prompt answer. All CT images in my dataset have the same resolution. Also, thanks, @diazandr3s, for your comment.
Thank you in advance.
-
Hi everyone,
I am currently working on segmenting various organs (approximately 25) in head-and-neck CT images. I have 30 labeled CT datasets (512x512 in-plane resolution). I modified the label names in the segmentation.py file and set use_pretrained_model to false. Despite training for many epochs (1000 to 3000) with a target spacing of 3 and an ROI size of (96, 96, 96), my accuracy remains below 10% for both the SegResNet and UNet networks (after experimenting with different spacings and ROI sizes, these values have given the best results so far). However, I am encountering several issues:
1- Since I aim to segment tissues of varying sizes (from optic nerves to cerebrum and lungs), what would be the optimal ROI size and spacing? Is it feasible to segment all organs accurately with a constant size for these parameters?
2- The predicted labels are often incorrect and seem random, e.g., the brain is labeled as lung, or muscle as thyroid. I've verified that the label numbers are consistent across my training datasets. What could be causing this issue?
3- Even when segmenting a single label, the accuracy is below 30% after training on 30 datasets for 100 epochs (this issue also persists when using the spleen dataset with the segmentation.py model without a pre-trained model). How many images are typically required to achieve high accuracy for single-label segmentation of a spleen-sized organ?
4- How can I implement affine transformations in main.py to augment my dataset and increase the number of training samples?
Any insights or suggestions on these issues would be greatly appreciated. Thank you!
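Regarding question 4, MONAI's random spatial transforms (e.g. RandAffined in a dictionary-based pipeline) are the idiomatic way to add affine augmentation. As a library-free illustration of the idea, a minimal scipy sketch that applies the same random rotation to an image and its label map might look like this (the function name and parameters are hypothetical):

```python
import numpy as np
from scipy import ndimage

def random_affine_pair(image, label, max_deg=10.0, seed=None):
    """Rotate an image/label pair by the same random in-plane angle.
    The image uses linear interpolation; the label uses nearest-neighbour
    so that label indices are never blended into invalid values."""
    rng = np.random.default_rng(seed)
    angle = rng.uniform(-max_deg, max_deg)
    img_aug = ndimage.rotate(image, angle, axes=(0, 1), reshape=False, order=1)
    lbl_aug = ndimage.rotate(label, angle, axes=(0, 1), reshape=False, order=0)
    return img_aug, lbl_aug
```

The key design point carries over to any framework: geometric transforms must be applied identically to image and label, and the label must always be resampled with nearest-neighbour interpolation.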