Hello, I'm having the same problem as you. Even though I use the pre-trained model provided by the author, the generated novel-view image is still very different from the input image. Have you solved this problem?
The image is cropped to conform to the cropping standard of the CelebA dataset.

Hello, I noticed that the images in the img_align_celeba dataset are aligned and cropped.
The above is the result of my testing with the pre-trained model provided by the author. The face in the test image is centered, at 128×128, but the generated image still does not closely resemble the input, and the eyes are always looking down. Is this a problem with the input image?

I tested with images from the FFHQ dataset (128×128) and found that the two faces are similar, with normal eye angles, so it should not be a problem with image size. What specific requirements does the input dataset need to meet? Looking forward to your reply, thank you!
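Since the discussion above suggests the input must follow the CelebA-style alignment and cropping (aligned CelebA images are 178×218), here is a minimal sketch of the center-crop step to a 128×128 square. This is only an illustration of center cropping, not the author's full alignment pipeline, which also uses facial landmarks; the function name and parameters are hypothetical.

```python
def center_crop_box(width, height, size=128):
    """Compute the (left, upper, right, lower) box that center-crops
    an image of the given dimensions to a size x size square.

    Hypothetical helper for CelebA-style preprocessing; the real
    pipeline may also align on facial landmarks before cropping.
    """
    if width < size or height < size:
        raise ValueError("image is smaller than the target crop size")
    left = (width - size) // 2
    upper = (height - size) // 2
    return (left, upper, left + size, upper + size)

# Example: a 178x218 aligned CelebA image cropped to 128x128
box = center_crop_box(178, 218)
# box == (25, 45, 153, 173)
```

The resulting box can be passed directly to `PIL.Image.Image.crop`, assuming Pillow is used for loading the images.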
the source image:
the new view:
the reconstruction:
the hybrid optimization:
Does anyone know how to improve this?