change spacing cause ground truth mask information loss? #21

Open
yuling-luo opened this issue Jan 10, 2024 · 3 comments

Comments
@yuling-luo

yuling-luo commented Jan 10, 2024

I have been experimenting with different spacings and found something interesting when I tried to plot the ground truth against the input image.
I have three classes of masks: background, A, and B. B always lies within region A.
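(For context on nested annotations: when overlapping polygons are rasterised into a single label mask, whichever annotation is drawn last overwrites the pixels underneath it. A toy numpy sketch of that effect, with the labels and draw order being my own assumptions rather than the parser's actual behaviour:)

```python
import numpy as np

# Toy sketch (assumed labels): 0 = background, 1 = A, 2 = B, B nested inside A.

# If B is rasterised after A, B survives:
mask_b_last = np.zeros((10, 10), dtype=np.uint8)
mask_b_last[2:8, 2:8] = 1   # A: outer region
mask_b_last[4:6, 4:6] = 2   # B: drawn last, overwrites part of A

# If A is rasterised after B, A wipes out B completely:
mask_a_last = np.zeros((10, 10), dtype=np.uint8)
mask_a_last[4:6, 4:6] = 2   # B: drawn first
mask_a_last[2:8, 2:8] = 1   # A: drawn last, covers all of B

print(np.unique(mask_b_last))  # [0 1 2]
print(np.unique(mask_a_last))  # [0 1] -> B is gone
```

So with overlapping classes, the rasterisation order alone decides whether the inner class is visible in the ground-truth mask.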

I modified the 'plot inference' function in the notebook and tried to visualise the ground truth against the image, since in my case the annotations overlap with each other. I called it with fig = plot_inference(x_batch[0][0], y_patch[0][0], x_batch[1][0], y_patch[0][1], x_batch[2][0], y_patch[0][2])

I first tested it with spacing [0.5, 2, 8]: it shows region A in pink and the background in black, but it omits region B completely.
Then I tested it with spacing [1, 4, 16]: I received a user warning saying that spacing 16 is outside the margin. However, with this spacing I can see black, red, and pink, which align well with background, A, and B.
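(One possible mechanism, just a guess on my part: if the coarse-spacing mask is produced by nearest-neighbour sampling of the fine mask, a thin structure like B can fall entirely between the sampled pixels and vanish. A minimal numpy illustration with made-up sizes:)

```python
import numpy as np

# Fine-spacing mask: 0 = background, 1 = A, 2 = B (a 2-pixel-thin strip inside A)
fine = np.zeros((64, 64), dtype=np.uint8)
fine[8:56, 8:56] = 1    # A: a large square
fine[30:32, 8:56] = 2   # B: a thin strip inside A

# Emulate an 8x coarser spacing via nearest-neighbour: keep every 8th pixel
coarse = fine[::8, ::8]

print(np.unique(fine))    # [0 1 2] -> all classes present at fine spacing
print(np.unique(coarse))  # [0 1]   -> the thin B strip is skipped entirely
```

Whether B survives downsampling then depends on how thick it is relative to the coarse pixel grid, which could explain why it appears at some spacings and not others.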

Did I miss something here? What might cause this issue?

Thank you!

@martvanrijthoven
Collaborator

Dear Yuling Luo,

This is indeed strange. Can you share the notebook code and the output plots? Maybe they will give me an insight into what goes wrong.

Best wishes,
Mart

@yuling-luo
Author


Hi Mart,

Please see my config:

wholeslidedata:
    default:
        seed: 123
        yaml_source: /local/configs/data.yml

        annotation_parser@replace(true): 
            "*object": src.dataset.parser.OMEXMLAnnotationParser
        
        labels: 
            vessel boundary: 1
            lesion: 2

        batch_shape:
            batch_size: 1
            shape: [[1244,1244,3],[1244,1244,3], [1244,1244,3]]
            spacing: [1, 4, 16]
            y_shape: [3, 1030, 1030]

        sample_callbacks:
            - "*object": wholeslidedata.samplers.callbacks.CropSampleCallback
              output_shape: [1030, 1030]
        
        batch_callbacks:
            - "*object": wholeslidedata.interoperability.albumentations.callbacks.AlbumentationsSegmentationBatchCallback
              augmentations:
                #- RandomRotate90:
                #    p: 0.4
                #- Flip:
                #    p: 0.4
                #- GridDistortion:
                #    p: 1.0
                - HueSaturationValue:
                    hue_shift_limit: 0.2
                    sat_shift_limit: 0.3
                    val_shift_limit: 0.2
                    p: 0.5
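(A side note on the "outside margin" warning: assuming the spacings are in microns per pixel, which is an assumption on my part, the physical field of view per patch level is patch size × spacing, so the coarsest level covers a very large area and can extend past the slide bounds. Back-of-the-envelope arithmetic:)

```python
# Assuming spacing is in microns per pixel (common for whole-slide images),
# field of view = patch size in pixels * spacing.
shape_px = 1244
for spacing in [1, 4, 16]:
    fov_um = shape_px * spacing
    print(f"spacing {spacing}: {fov_um} um = {fov_um / 1000:.2f} mm")
# spacing 16 -> 19904 um, roughly 19.9 mm, which can easily reach past the
# tissue/annotation margin near the slide edge and trigger the warning.
```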

And the plot inference function:

import matplotlib.pyplot as plt

def plot_inference(patch_1, gt_1, patch_2, gt_2, patch_3, gt_3, prediction=None):
    # plot_mask comes from the notebook's helpers
    colors = ['black', 'red', 'pink']
    fig, axes = plt.subplots(1, 7, figsize=(10, 10))
    axes[0].imshow(patch_1.permute(1, 2, 0))
    plot_mask(gt_1, axes=axes[1], color_values=colors)
    axes[2].imshow(patch_2.permute(1, 2, 0))
    plot_mask(gt_2, axes=axes[3], color_values=colors)
    axes[4].imshow(patch_3.permute(1, 2, 0))
    plot_mask(gt_3, axes=axes[5], color_values=colors)
    if prediction is not None:
        plot_mask(prediction, axes=axes[6], color_values=colors)
    return fig
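(Not the notebook's actual helper, but a minimal stand-in for what plot_mask presumably does: map class indices 0/1/2 to color_values via a fixed colormap. The point is that a class index that never occurs in the array simply never shows its colour, so a missing pink region in the plot means index 2 is absent from the mask itself:)

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np

def plot_mask_sketch(mask, ax, color_values):
    # Hypothetical stand-in for plot_mask: fixed index -> colour mapping,
    # pinned with vmin/vmax so colours stay stable across masks.
    cmap = ListedColormap(color_values)
    ax.imshow(mask, cmap=cmap, vmin=0, vmax=len(color_values) - 1)

mask = np.zeros((32, 32), dtype=np.uint8)
mask[8:24, 8:24] = 1   # only classes 0 and 1 present; class 2 never drawn

fig, ax = plt.subplots()
plot_mask_sketch(mask, ax, ['black', 'red', 'pink'])  # pink never appears
```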

Unfortunately I can't share the plots as they contain sensitive data, sorry about that :(

@martvanrijthoven
Collaborator

Dear Yuling Luo,

I see, I understand that you cannot share the data. However, would it be possible to privately share with me a single annotation file (without the image file) and your custom parser? Then I can do some experimentation with the mask generation and see what goes wrong.

Best wishes,
Mart
