Hi Yinpeng, it is my honor to read your paper on Translation-Invariant Attacks. I have tried your idea in my own task, face recognition, to generate adversarial examples in a black-box manner. However, I find that the mean distance between the original pictures and the corresponding perturbed pictures becomes smaller as kernel_size increases, which is contrary to the result in your paper. I have not solved this problem yet. Would you mind giving me some suggestions for analyzing this phenomenon? Thanks.
Best regards,
Looking forward to your reply.
Hi,
Sorry for the late reply. What threat model did you use in your experiments? If you use the L-infinity norm threat model, then after you apply the sign of the gradient, the distance should be similar to that of the attack without translation invariance; a quick numerical check of this point is sketched below. Alternatively, you may try a smaller kernel size.
Best,
Yinpeng
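To illustrate the point above, here is a minimal sketch (not the paper's code; the use of scipy.ndimage.gaussian_filter and the sigma values are assumptions for illustration only). Once the smoothed gradient is passed through the sign function, every pixel moves by exactly ±alpha per step, so the L-infinity distance to the original image does not shrink as the smoothing kernel grows; only the L2 distance may change slightly because of clipping at the image bounds.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of one translation-invariant L-infinity step:
#   x_adv = clip(x + alpha * sign(W * grad))
# where W is a smoothing kernel. The sign makes each pixel move by +/-alpha,
# independent of how large the kernel is.

rng = np.random.default_rng(0)
grad = rng.standard_normal((299, 299, 3))   # stand-in for a model gradient
x = rng.uniform(0, 1, (299, 299, 3))        # stand-in for an input image
alpha = 2.0 / 255                            # per-step L-infinity budget

for sigma in (0.0, 3.0, 7.0):                # larger sigma ~ larger kernel_size
    smoothed = gaussian_filter(grad, sigma=(sigma, sigma, 0))  # smooth spatial axes only
    x_adv = np.clip(x + alpha * np.sign(smoothed), 0, 1)
    linf = np.abs(x_adv - x).max()
    l2 = np.linalg.norm(x_adv - x)
    print(f"sigma={sigma}: L_inf={linf:.4f}, L2={l2:.2f}")
```

The printed L-infinity distance stays at alpha for every sigma. Note that this only holds for the sign-gradient (L-infinity) setup; if the update uses the raw smoothed gradient scaled by a step size, heavier smoothing typically lowers the gradient magnitude, which could explain distances that shrink with larger kernels.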