First of all, thank you for sharing the code and dataset information related to GIAS.
I tried to reproduce the inversion attack results using the default script on the ImageNet dataset (filename: ours_vw_bs4.sh).
However, the inverted images were noticeably different from the ground truth, although they still look natural, probably thanks to the stronger image prior induced by the GAN. The reconstruction loss during optimization was not unusually high and seemed normal to me.
Should I tune the hyperparameters in the script for better results on ImageNet?
I also tried the FFHQ dataset with the same script, but the inverted images were far from the ground truth. Could you give me some suggestions?
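For reference, here is a minimal sketch of the gradient-matching objective commonly used in gradient inversion attacks of this kind (1 minus the cosine similarity between the victim's shared gradients and the gradients produced by the candidate image). This is my own illustration for discussion, not the repository's actual code; the function name `cosine_gradient_loss` is made up:

```python
import numpy as np

def cosine_gradient_loss(true_grads, est_grads):
    """Gradient-matching reconstruction loss: 1 - cosine similarity
    between the victim's per-layer gradients and the candidate's.

    Both arguments are lists of numpy arrays (one per model parameter);
    all layers are treated as one flattened vector, as is typical.
    """
    dot = sum(float(np.vdot(g, h)) for g, h in zip(true_grads, est_grads))
    norm_true = np.sqrt(sum(float(np.vdot(g, g)) for g in true_grads))
    norm_est = np.sqrt(sum(float(np.vdot(h, h)) for h in est_grads))
    return 1.0 - dot / (norm_true * norm_est + 1e-12)
```

In the GAN-prior setting, this loss is minimized over the generator's latent (and, in GIAS, the generator parameters) rather than raw pixels, so a "normal-looking" loss value can still correspond to a reconstruction that landed on a natural but wrong image.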
Hello, hong
I ran into the same issue when I tried to reproduce StyleGAN2 on FFHQ128: the generated images are very unrealistic.
Have you solved this problem? I would be grateful for any advice.