
AEPW-pytorch

A PyTorch implementation of "Adversarial Examples in the Physical World"

Summary

This code is a PyTorch implementation of the basic iterative method (also known as I-FGSM) and the iterative least-likely class method.
In this code, the methods above are used to fool Inception v3.
'Giant Panda' is used as an example.
You can add other pictures by placing them in a folder named after their label under 'data/imagenet'.

This code also shows that an adversarial attack is still possible with a photo of an adversarial image.
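For reference, here is a minimal, self-contained sketch of both attacks. It is not taken from this repository: `iterative_attack` and its hyperparameters (`eps`, `alpha`, `steps`) are illustrative names, images are assumed to be scaled to [0, 1], and the model is assumed to handle its own input normalization.

```python
import torch
import torch.nn as nn

def iterative_attack(model, images, labels, eps=8/255, alpha=1/255,
                     steps=10, least_likely=False):
    """Sketch of the basic iterative method (I-FGSM) and the iterative
    least-likely class variant. Hyperparameters are illustrative; images
    are assumed to be in [0, 1] and `model` to handle its own normalization.
    """
    loss_fn = nn.CrossEntropyLoss()
    original = images.clone().detach()
    adv = images.clone().detach()

    if least_likely:
        # Target the class the model finds least likely on the clean input.
        with torch.no_grad():
            labels = model(original).argmin(dim=1)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            # I-FGSM ascends the loss on the true label; the least-likely
            # variant descends the loss on the least-likely target instead.
            step = -alpha * grad.sign() if least_likely else alpha * grad.sign()
            adv = adv + step
            # Clip to the eps-ball around the original image, then to [0, 1].
            adv = original + torch.clamp(adv - original, min=-eps, max=eps)
            adv = torch.clamp(adv, 0, 1).detach()
    return adv
```

For example, the pretrained Inception v3 from torchvision could serve as the target model: `model = torchvision.models.inception_v3(pretrained=True).eval()`.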

Requirements

  • python==3.6
  • numpy==1.14.2
  • pytorch==1.0.0

Important results not in the code

  • The paper proposes a new metric, called the destruction rate, to measure the influence of arbitrary transformations (p.6); a sketch of the computation follows this list.
    • The destruction rate is the fraction of adversarial images that are no longer misclassified after the transformations.
    • Adversarial examples generated by FGSM are the most robust to transformations.
    • The iterative least-likely class method is the least robust.
    • Blur, noise, and JPEG encoding have a higher destruction rate than brightness and contrast changes.
  • The paper shows that iterative methods, although they produce very high-confidence adversarial examples, rarely survive photo transformation (p.8-9).
    • In the prefiltered case (clean images classified correctly, adversarial images confidently misclassified), the adversarial examples could no longer fool the model after the transformations, unlike in the average case (randomly chosen images).
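Below is a minimal sketch of how the destruction rate could be computed, assuming a batched classifier and a callable `transform` (e.g. blur, noise, JPEG, or a photo of a printout). The function name and arguments are illustrative, not part of this repository.

```python
import torch

def destruction_rate(model, clean, adv, labels, transform):
    """Sketch of the destruction rate: among pairs where the clean image is
    classified correctly and the adversarial image is misclassified, the
    fraction of adversarial images classified correctly again after
    `transform` is applied.
    """
    with torch.no_grad():
        clean_ok = model(clean).argmax(dim=1) == labels        # clean image correct
        adv_fools = model(adv).argmax(dim=1) != labels         # adversarial image fools the model
        recovered = model(transform(adv)).argmax(dim=1) == labels  # correct again after transform
    valid = clean_ok & adv_fools
    return (recovered & valid).sum().item() / max(valid.sum().item(), 1)
```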

Notice
