- COVID-CT
- ImageNet
- Jupyter Notebook is highly recommended, as it makes it easy to display the experiment results:
  `pip3 install jupyter`
- We have tested our code under the following settings:

  | Python | TensorFlow | CUDA | cuDNN |
  |--------|------------|------|-------|
  | 2.7    | 1.3        | 8.0  | 5.1   |
  | 3.6    | 1.13       | 10.0 | 7.5   |
- Start collecting non-targeted adversarial samples to fool the DNNs (a sketch of the perturbation format follows this step):
  run `python ./attacks/exp1-non-targeted_attacks_collection.py 1` or `exp1_non-targeted_data_collection.ipynb`
  - number of pixels: the trailing argument (here `1`) is the number of pixels to perturb; you can pass any number of pixels to attack the DNNs.
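For context, OPA-style pixel attacks encode a candidate perturbation as a flat vector of (x, y, r, g, b) tuples, one per attacked pixel. The helper below is a hypothetical illustration (not part of this repo) of how such a candidate is applied to an image:

```python
import numpy as np

def apply_pixel_perturbation(image, xs):
    # xs is a flat array encoding k pixels as (x, y, r, g, b) tuples,
    # the candidate format used by differential-evolution pixel attacks.
    img = image.copy()
    for x, y, r, g, b in np.asarray(xs, dtype=float).reshape(-1, 5):
        img[int(x), int(y)] = (r, g, b)  # overwrite one pixel's RGB value
    return img
```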
- Start collecting targeted adversarial samples to fool the DNNs:
  run `exp3_targeted_data_collection.ipynb`
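The targeted collection differs from the non-targeted one only in the objective the attack optimizes. A minimal sketch, assuming a hypothetical `predict_probs(img)` that returns the model's softmax vector:

```python
def attack_objective(img, true_class, predict_probs, target_class=None):
    # Hypothetical objective for a minimizer (e.g., differential evolution).
    probs = predict_probs(img)
    if target_class is None:
        # Non-targeted: drive down confidence in the true class.
        return probs[true_class]
    # Targeted: drive up confidence in the chosen target class
    # (negated so the same minimizer can be reused).
    return -probs[target_class]
```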
- The adversarial samples and clean images can also be downloaded from Google Drive.
- Download the adversarial samples from Google Drive or generate them with the code above.
- Use our defense system to restore the input images before they are given to the DNNs (the flow is sketched below):
  run `exp5_our_defense.ipynb`
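The flow in this step is restore-then-classify. A minimal sketch of that flow, assuming a hypothetical `restore(image)` function standing in for the defense implemented in the notebook:

```python
import numpy as np

def defend_and_classify(model, images, restore):
    # Restore every (possibly adversarial) input with the defense,
    # then classify the restored batch with a Keras-style model.
    restored = np.stack([restore(img) for img in images])
    probs = model.predict(restored)
    return probs.argmax(axis=1)  # predicted class ids
```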
- Test the Top-1 accuracy after applying our defense technique:
  run `exp6_DNNs_reclass.ipynb`
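Top-1 accuracy here is the fraction of restored images whose highest-probability prediction matches the ground-truth label. A minimal NumPy sketch (variable names are illustrative):

```python
import numpy as np

def top1_accuracy(probs, labels):
    # probs: (N, num_classes) softmax outputs; labels: (N,) ground-truth ids.
    return float(np.mean(probs.argmax(axis=1) == labels))
```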
- Download the 1000-image ImageNet dataset from the following link: https://www.kaggle.com/benhamner/adversarial-learning-challenges-getting-started/data
- Start collecting non-targeted adversarial samples to fool Inception-V3:
  run `exp4_imageNet_data_collection.ipynb`
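To sanity-check predictions outside the notebook, here is a minimal Keras sketch that loads the pretrained Inception-V3 and classifies a single image (`photo.jpg` is a placeholder path; the weights are downloaded on first use):

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights='imagenet')  # pretrained ImageNet classifier

img = image.load_img('photo.jpg', target_size=(299, 299))  # Inception-V3 input size
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
print(decode_predictions(model.predict(x), top=1))  # top-1 label and confidence
```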
- Use our defense system to restore the input images before they are given to the DNNs:
  run `exp5_our_defense.ipynb`
- The adversarial samples and clean images can also be downloaded from Google Drive.
- Test the Top-1 accuracy after applying our defense technique:
  run `exp7_imageNet_reclass.ipynb`
We propose the SDR and EDR metrics to compare the defense performance of different baselines. A sketch of both computations follows the list.
- Success Defense Rate (SDR): the percentage of adversarial samples that are successfully defended after applying a defense technique, SDR = (a / N1) × 100%, where a is the number of successfully defended samples and N1 is the total number of adversarial samples. A higher score indicates better defense performance.
- Error Defense Rate (EDR): the percentage of error defense images, i.e., clean images that are misclassified after the defense method is applied, EDR = (b / N2) × 100%, where b is the number of error defense images and N2 is the total number of clean images. A lower score indicates better defense performance.
- Top-1 Accuracy: the fraction of images for which the model's highest-probability prediction matches the expected label.
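As a concrete reading of the two formulas, a minimal sketch (the counts a and b come from your own evaluation loop; the numbers in the example are made up):

```python
def sdr(a, n1):
    # Success Defense Rate: a successfully defended out of N1 adversarial samples.
    return 100.0 * a / n1

def edr(b, n2):
    # Error Defense Rate: b clean images misclassified out of N2 clean images.
    return 100.0 * b / n2

print(sdr(857, 1000), edr(23, 1000))  # hypothetical counts -> 85.7 2.3
```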
If you use this middleware or benchmark in your research, please cite the paper below. An extended version will be submitted to a journal.
@article{chen2022stpd,
  title={STPD: Defending against $\ell_0$-norm attacks with space transformation},
  author={Chen, Jinlin and Cao, Jiannong and Liang, Zhixuan and Cui, Xiaohui and Yu, Lequan and Li, Wei},
  journal={Future Generation Computer Systems},
  volume={126},
  pages={225--236},
  year={2022},
  publisher={Elsevier}
}
- The one-pixel attack (OPA) code for adversarial sample collection is from one-pixel-attack-keras.
- The JSMA attack code for adversarial sample collection is from the Adversarial Robustness Toolbox.
- The CW L0-norm attack code for adversarial sample collection is from nn_robust_attacks.