
How were the synthetic scribbles generated during training? #8

Open
aL3x-O-o-Hung opened this issue Aug 2, 2024 · 3 comments

@aL3x-O-o-Hung

I generated some fake scribbles on our dataset using the ground truth labels, but the pretrained model cannot predict anything meaningful with them, while the generated clicks and bounding boxes work fine. I am wondering how the scribbles were generated during training, and how my generated scribbles can shift the input distribution so much that they completely confuse the network.

@halleewong
Owner

Can you show some examples?

The scribbles were generated using the code in scribbleprompt/scribbles.py during training.

@aL3x-O-o-Hung
Author

Hi!

Thanks for the reply. Please find these examples attached.

Attached examples: microUS_test_01_positive_slice_11, microUS_test_01_positive_slice_16

@halleewong
Copy link
Owner

halleewong commented Aug 5, 2024

Those scribbles look quite thick relative to the size of the images. The random deformations in our scribble generation code vary the scribble thickness a bit, but the training scribbles are rarely as wide and as sharp-edged as these examples. I suspect the predictions will be much better with thinner scribbles and/or a lightly blurred scribble mask.
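A minimal sketch of the suggested fix, not taken from the ScribblePrompt codebase: given a thick binary scribble mask as a NumPy array, thin it with a few iterations of morphological erosion, then soften the edges with a light Gaussian blur. The helper name and the parameter defaults (`n_erosions`, `sigma`) are hypothetical choices for illustration.

```python
import numpy as np
from scipy import ndimage

def thin_and_blur_scribble(scribble, n_erosions=2, sigma=1.0):
    """Thin a binary scribble mask and lightly blur it.

    Hypothetical helper (not from the ScribblePrompt repo): erode the
    mask a few times to reduce its width, then apply a small Gaussian
    blur so the edges are no longer razor sharp.
    """
    thin = scribble.astype(bool)
    for _ in range(n_erosions):
        eroded = ndimage.binary_erosion(thin)
        if not eroded.any():
            break  # stop before the scribble vanishes entirely
        thin = eroded
    # Blur the thinned binary mask and rescale to [0, 1]
    soft = ndimage.gaussian_filter(thin.astype(np.float32), sigma=sigma)
    return soft / soft.max() if soft.max() > 0 else soft

# Toy example: an ~8-pixel-wide diagonal band standing in for a thick scribble
mask = np.zeros((64, 64), dtype=np.uint8)
for i in range(64):
    mask[max(0, i - 4):min(64, i + 4), i] = 1

soft = thin_and_blur_scribble(mask)
```

The blurred float mask covers fewer strongly-activated pixels than the original thick binary scribble, which is closer to what the training-time scribbles in `scribbleprompt/scribbles.py` reportedly look like.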
