Add Backward Smoothing #26
Hi, thanks for the submission! I ran the evaluation with the Linf bound on CIFAR-10 and CIFAR-100; the results seem to me in line with what is reported in the paper. If this is the case, I'd be happy to add them!
The numbers are correct. Thank you very much!
Added, thanks again for the submissions!
Paper: Efficient Robust Training via Backward Smoothing https://arxiv.org/abs/2010.01278
Venue: {if applicable, the venue where the paper appeared}
Dataset and threat model: CIFAR-10/CIFAR-100, Linf, eps = 8/255
Code: https://github.com/jinghuichen/AutoAttackEval
Pre-trained model: https://drive.google.com/file/d/1lvMa2rbMrIVkAqsyrs_YXLBhewZBfdkP/view?usp=sharing (CIFAR10)
https://drive.google.com/file/d/1xNhK4w5ZuUSfbD_WR4xFKTprojaVux1A/view?usp=sharing (CIFAR100)
Log file: {link to log file of the evaluation}
Additional data: no
Clean and robust accuracy: CIFAR-10: clean 85.32%, robust 54.94%; CIFAR-100: clean 62.15%, robust 31.92%
Architecture: WideResNet-34-10
Description of the model/defense: Efficient robust training via backward smoothing
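For context, the threat model above (Linf, eps = 8/255) corresponds to a standard AutoAttack evaluation. Below is a minimal sketch of how such an evaluation is typically run with the `autoattack` package. The checkpoint filename and the `WideResNet` class are placeholders, not part of the submission; in practice they would come from the linked Google Drive file and the submitter's code repository.

```python
# Sketch of a standard AutoAttack evaluation under the Linf, eps = 8/255
# threat model. Assumes: the `autoattack` package is installed, the
# pre-trained checkpoint has been downloaded locally, and a WideResNet-34-10
# implementation is available. `WideResNet` and the checkpoint filename
# below are hypothetical placeholders.
import torch
from torchvision import datasets, transforms
from autoattack import AutoAttack

from models import WideResNet  # placeholder: use the architecture from the repo

model = WideResNet(depth=34, widen_factor=10, num_classes=10)
state = torch.load("backward_smoothing_cifar10.pt", map_location="cpu")  # placeholder path
model.load_state_dict(state)
model.eval()

# Load the full CIFAR-10 test set as tensors in [0, 1].
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())
x_test = torch.stack([x for x, _ in test_set])
y_test = torch.tensor([y for _, y in test_set])

# "standard" version runs APGD-CE, APGD-T, FAB-T, and Square attack;
# robust accuracy is reported on the surviving adversarial examples.
adversary = AutoAttack(model, norm="Linf", eps=8 / 255, version="standard")
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```

The same script applies to the CIFAR-100 checkpoint with `num_classes=100` and `datasets.CIFAR100`.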
Thanks