This repository provides implementations of well-known adversarial attacks.
- python 3.6.1
- pytorch 1.4.0
- Explaining and Harnessing Adversarial Examples: FGSM (see the sketch below)
- Towards Evaluating the Robustness of Neural Networks: CW
- Towards Deep Learning Models Resistant to Adversarial Attacks: PGD
- DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks: DeepFool
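For reference, FGSM is the simplest of the four: it perturbs the input by epsilon in the direction of the sign of the loss gradient. The following is a minimal, generic PyTorch sketch, not the code used in this repository; `model`, `images`, `labels`, and `eps` are placeholder names.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps):
    """Minimal FGSM sketch: x_adv = x + eps * sign(grad_x L(f(x), y))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    adv_images = images + eps * images.grad.sign()
    return torch.clamp(adv_images, 0, 1).detach()
```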
Multiple GPUs are supported.
foo@bar:.../Attack-repo$ ./attack.sh
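How the scripts use multiple GPUs is up to the repository's code, but in PyTorch it is typically just a matter of wrapping the model. A minimal sketch, assuming torch.nn.DataParallel is the mechanism and using a standard torchvision ResNet as a stand-in for the repository's customized models:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)  # placeholder; the repo defines its own ResNet variants
if torch.cuda.device_count() > 1:
    # Replicate the model across all visible GPUs; input batches are split along dim 0.
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()
```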
If you want to change hyper-parameters such as the attack method or the epsilon values that control attack strength, open attack.sh and edit the arguments.
Open the file in a terminal or in your favorite editor,
foo@bar:.../Attack-repo$ vim attack.sh
and change the values in the "Set parameters" block.
Descriptions of each argument are available in config.py.
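The authoritative argument names and descriptions are in config.py; the sketch below only illustrates the usual argparse pattern such a file follows. The flag names (--attack_method, --eps, --dataset) and defaults are assumptions for illustration, not the repository's actual interface.

```python
import argparse

def get_config():
    # Hypothetical parameter names for illustration; check config.py for the real ones.
    parser = argparse.ArgumentParser(description="Adversarial attack settings")
    parser.add_argument("--attack_method", type=str, default="FGSM",
                        help="One of the supported attacks, e.g. FGSM, CW, PGD, DeepFool")
    parser.add_argument("--eps", type=float, default=8 / 255,
                        help="Perturbation budget controlling attack strength")
    parser.add_argument("--dataset", type=str, default="cifar10",
                        help="cifar10 or cifar100")
    return parser.parse_args()
```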
foo@bar:.../Attack-repo$ ./visualize.sh
You can set options to save all images or only the ones you select.
foo@bar:.../Attack-repo$ vim visualize.sh
- parameters
- normal: Set to 'true' to save normal (clean) images. If 'false', adversarial examples will be saved.
- n_rows: Number of rows in the saved figure.
- batch_size: Mini-batch size for torch.utils.data.DataLoader. If you don't need to compare images one by one, you can use a size as large as your GPU resources allow.
- set_idx: If set to 'true', only the images whose indices are listed in the indices variable (described below) will be saved. If set to 'false', all images will be saved. See the sketch after this list.
- indices: Image indices to save.
- attack_method: 'FGSM', 'CW', 'PGD', and 'DeepFool' are allowed. The values are case-sensitive.
- dataset: 'cifar10' and 'cifar100' are allowed.
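To make the set_idx / indices behavior concrete, the sketch below shows one common way such filtering is implemented. It is an illustration, not the repository's visualize code; the names images, indices, set_idx, and n_rows simply mirror the parameters above.

```python
import math
import torch
from torchvision.utils import save_image

def save_examples(images, indices, set_idx, n_rows, out_path="examples.png"):
    """Save either all images or only those at the given indices as one grid figure."""
    if set_idx:
        images = images[torch.tensor(indices)]  # keep only the selected indices
    # torchvision's nrow is the number of images per row, so derive it from the
    # desired number of rows in the figure.
    per_row = math.ceil(images.size(0) / n_rows)
    save_image(images, out_path, nrow=per_row)
```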
Customized ResNet-based models are pre-trained on the CIFAR-10 and CIFAR-100 datasets.
There are three pre-trained models for each dataset, and you can download the pre-trained weights from the following links:
- CIFAR10
- CIFAR100
The expected locations of these files are:
- Attack-repo
  |- resnet
     |- pretrained_models
        |- cifar10
        |  |- resnet18.pth
        |  |- resnet50.pth
        |  |- resnet101.pth
        |- cifar100
           |- resnet101.pth
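Once the weight files are in place, they are typically loaded with torch.load and load_state_dict. A minimal sketch, assuming the .pth file stores a plain state_dict and using a standard torchvision ResNet as a stand-in (the repository's customized ResNet classes live under resnet/, so the actual loading code may differ):

```python
import torch
from torchvision.models import resnet18

# Placeholder architecture; loading only succeeds if it matches the saved weights.
model = resnet18(num_classes=10)
state_dict = torch.load("resnet/pretrained_models/cifar10/resnet18.pth",
                        map_location="cpu")
# If the file stores a full checkpoint dict instead, extract its state_dict first.
model.load_state_dict(state_dict)
model.eval()
```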