I tried to use the composition loss in my training, and I found that the merged image's color differs from the original image.
Why is color jitter applied only to the fg? Is that the reason for the color difference?
By the way, I noticed that the regression loss only uses the unknown area.
Would the composition loss also work well if it were restricted to the unknown area?
It is because there are 100k+ bg images in COCO but only 400+ fg images. Also, some of the jitter on the merged images will probably be neutralized by batch normalization.
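To illustrate why jittering only the fg makes the re-composited image differ from the original merged image, here is a minimal single-pixel sketch (the jitter factor and clamping are toy assumptions, not the repository's actual augmentation pipeline):

```python
def composite(alpha, fg_px, bg_px):
    # Per-channel alpha compositing: merged = a*F + (1-a)*B
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg_px, bg_px))

# Original merged pixel, composited from un-jittered fg/bg.
a = 0.6
fg_px = (0.8, 0.4, 0.2)
bg_px = (0.1, 0.1, 0.1)
image_px = composite(a, fg_px, bg_px)

# Toy brightness jitter applied to fg only (hypothetical 1.2x factor):
jittered_fg = tuple(min(c * 1.2, 1.0) for c in fg_px)
merged_px = composite(a, jittered_fg, bg_px)
# merged_px now differs from image_px wherever alpha > 0 and fg changed
```

Since the bg is left untouched, the mismatch grows with alpha: fully-fg pixels shift by the full jitter amount, while fully-bg pixels stay identical.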
In our code the composition loss is calculated over the unknown area, as proposed in the Deep Image Matting paper. Since the composition loss is not part of our method, we did not run many experiments on it.
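For reference, a composition loss restricted to the unknown trimap region can be sketched as below. This is an illustrative stdlib-only version operating on flat pixel lists, not the repository's implementation; the trimap convention (0 = bg, 255 = fg, 128 = unknown) and the Charbonnier-style epsilon follow common matting practice and are assumptions here:

```python
import math

def composition_loss(pred_alpha, fg, bg, image, trimap, eps=1e-6):
    """Mean composition loss over unknown-region pixels only.

    pred_alpha and trimap are flat per-pixel lists; fg, bg, and image
    are lists of RGB triples of the same length.
    """
    total, count = 0.0, 0
    for a, f, b, im, t in zip(pred_alpha, fg, bg, image, trimap):
        if t != 128:  # skip known fg/bg pixels; loss uses unknown area only
            continue
        for fc, bc, ic in zip(f, b, im):
            merged = a * fc + (1.0 - a) * bc
            # Smooth approximate-L1 penalty, as in Deep Image Matting
            total += math.sqrt((merged - ic) ** 2 + eps ** 2)
            count += 1
    return total / max(count, 1)
```

With a perfect alpha prediction the composite reconstructs the image exactly in the unknown region, so the loss falls to roughly eps; pixels marked as known fg/bg contribute nothing.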