I generated some fake scribbles on our dataset using the ground-truth labels, but the pretrained model cannot predict anything meaningful with them, while the generated clicks and bounding boxes work fine. I am wondering how the scribbles are generated, and how the generated scribbles can shift the input distribution so much that they completely confuse the network.
Those scribbles look quite thick relative to the size of the images. The random deformations in our scribble-generation code vary the scribble thickness a bit, but the training scribbles are rarely as wide and as sharp-edged as these examples. I suspect the predictions will be much better with thinner scribbles and/or a lightly blurred scribble mask.
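As a starting point, here is a minimal sketch (not the repo's own scribble generator) of how a thick, hard-edged scribble mask could be thinned and softened before being fed to the model. It assumes the scribble is a binary H x W NumPy array; `target_width_px` and `blur_sigma` are illustrative parameters, not names from the codebase.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.morphology import skeletonize, binary_dilation, disk


def thin_and_soften_scribble(scribble_mask: np.ndarray,
                             target_width_px: int = 3,
                             blur_sigma: float = 1.0) -> np.ndarray:
    """Reduce a thick binary scribble to a thin, lightly blurred soft mask in [0, 1]."""
    binary = scribble_mask.astype(bool)
    # Collapse the thick stroke to its 1-pixel-wide centerline.
    centerline = skeletonize(binary)
    # Re-grow the centerline to roughly the desired stroke width.
    radius = max(target_width_px // 2, 1)
    thin = binary_dilation(centerline, disk(radius))
    # Lightly blur so the edges are not razor-sharp, then renormalize to [0, 1].
    soft = gaussian_filter(thin.astype(np.float32), sigma=blur_sigma)
    return soft / max(float(soft.max()), 1e-8)
```

You could sweep the stroke width and blur strength on a few validation images to see at what point the predictions recover, which would confirm whether the distribution shift really comes from the scribble thickness.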