How does the identity weight affect the outcome? In the current code you gave it a weight of 0.5. Did you also try it with 1? What is the difference?

With a higher identity loss, the image translation becomes more conservative, so it makes fewer changes. I tried values like 0.1, 0.5, 1, and 10, and the results were not very different. I believe this is because the generator essentially figures out two different translation functions by detecting whether the input image comes from the source dataset or the target dataset.
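For reference, here is a minimal sketch of how an identity-loss weight typically enters a CycleGAN-style generator objective. The names below (G_AB, G_BA, D_B, lambda_identity, lambda_cycle) are illustrative assumptions, not this repository's actual identifiers.

```python
import torch
import torch.nn.functional as F

def generator_loss(G_AB, G_BA, D_B, real_A, real_B,
                   lambda_cycle=10.0, lambda_identity=0.5):
    # Adversarial term: G_AB should fool D_B on translated images.
    fake_B = G_AB(real_A)
    adv = F.mse_loss(D_B(fake_B), torch.ones_like(D_B(fake_B)))

    # Cycle-consistency term: A -> B -> A should reconstruct the input.
    cycle = F.l1_loss(G_BA(fake_B), real_A)

    # Identity term: feeding a target-domain image to G_AB should leave
    # it (nearly) unchanged. A larger lambda_identity pulls the generator
    # toward more conservative edits.
    identity = F.l1_loss(G_AB(real_B), real_B)

    return adv + lambda_cycle * cycle + lambda_identity * identity
```

Under this formulation, moving lambda_identity from 0.5 to 1 only changes the relative pull of the identity term against the adversarial and cycle terms, which is consistent with the observation above that results stay similar across 0.1 to 10.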