5th-place solution for the non-targeted attack track of the IJCAI-2019 Alibaba Adversarial AI Challenge (AAAC 2019)
IJCAI-2019 Alibaba Adversarial AI Challenge (AAAC 2019): https://tianchi.aliyun.com/competition/entrance/231701/introduction
We participated in the IJCAI-2019 Alibaba Adversarial AI Challenge and took 5th place in the non-targeted attack track.
Our method is a gradient-based attack.
We use a number of tricks to improve attack strength and transferability.
Currently only the scripts are released.
- attack_tijiao2.py: main attack script
- test_search.py: script to test the attack method
- gen_attack.py: script to generate adversarial data for subsequent training
- train_ads.py: script to train a model on the generated adversarial data
Python 3
PyTorch 0.4+
other necessary packages used in the scripts
To attack a model and generate adversarial images:
python attack_tijiao2.py --input_dir=/path/to/your/input_images --output_dir=/path/to/your/output_dir
You need to replace the pretrained weight paths in attack_tijiao2.py and place dev.csv in the input image directory.
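As a minimal sketch of what plugging in the weights looks like (assuming a torchvision backbone and a placeholder checkpoint path; the actual model classes and paths in attack_tijiao2.py may differ):

```python
# Hypothetical example of loading pretrained weights; names and paths are
# placeholders, not the ones used in attack_tijiao2.py.
import torch
import torchvision.models as models

NUM_CLASSES = 110  # set to the number of classes in the competition dataset

model = models.resnet50(num_classes=NUM_CLASSES)
state_dict = torch.load("/path/to/your/weights.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```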
To test the adversarial images:
python test_search.py --input_dir=/path/to/your/input_images --output_dir=/path/to/your/output_dir --if_attack=0
To search for attack parameters, you can use the script test_search.py.
To attack a model, you need pretrained weights for the dataset.
Put your weights in the right directory according to the paths in attack_tijiao2.py.
You can find all the tricks in attack_tijiao2.py.
Our method is a gradient-based attack.
It builds on the prior work Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks, on top of which we add a number of our own tricks that we believe help:
- Iterative gradient ascent (the loss function is CrossEntropyLoss).
- Gaussian kernel convolution of the gradient (the key point of the paper above); see the sketch after this list.
- Input diversity (random resize and padding of the image); it does not always seem to help.
- A Class Activation Map (CAM) mask applied to the noise; see the sketch after this list.
- A reverse cross-entropy loss added to the original loss function.
- Multiplying the noise by its pixel-wise norm.
- Model ensembling, with different weights for different models according to each model's predictions during the attack iterations; see the sketch after this list.
- Setting the noise at the image edges to zero (may help).
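A minimal sketch of the core attack loop described above (iterative gradient ascent on the cross-entropy loss, a Gaussian-kernel convolution of the gradient, input diversity, and an edge mask on the noise). All function names, kernel sizes, and step sizes are illustrative assumptions, not the exact values used in attack_tijiao2.py; the CAM mask, reverse cross-entropy loss, pixel-norm scaling, and ensemble weighting are omitted here.

```python
import numpy as np
import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0):
    """Depthwise 2D Gaussian kernel used to smooth the gradient (translation-invariant trick)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)).astype(np.float32)
    k /= k.sum()
    return torch.from_numpy(k)[None, None].repeat(3, 1, 1, 1)  # (3, 1, size, size)

def input_diversity(x, low=270, prob=0.5):
    """Randomly resize the image and pad it back to its original size."""
    if torch.rand(1).item() > prob:
        return x
    out_size = x.shape[-1]
    size = int(torch.randint(low, out_size, (1,)).item())
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = out_size - size
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    return F.pad(resized, (left, pad - left, top, pad - top))

def edge_mask(h, w, border=2, device="cpu"):
    """Mask that forces the perturbation to zero on the image border."""
    m = torch.ones(1, 1, h, w, device=device)
    m[..., :border, :] = 0
    m[..., -border:, :] = 0
    m[..., :, :border] = 0
    m[..., :, -border:] = 0
    return m

def attack(model, x, y, eps=16 / 255, alpha=2 / 255, steps=10):
    """Non-targeted attack: maximize the cross-entropy loss of the true label y."""
    kernel = gaussian_kernel().to(x.device)
    mask = edge_mask(x.shape[2], x.shape[3], device=x.device)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(input_diversity(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Translation-invariant trick: convolve the gradient with the Gaussian kernel.
        grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)
        noise = alpha * grad.sign() * mask          # zero the noise on the border
        x_adv = x_adv.detach() + noise
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```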
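A rough sketch of the Class Activation Map mask idea (concentrating the noise on the regions the classifier attends to). How the feature maps and classifier weights are obtained, and how the mask is normalized, are assumptions here; the actual masking in attack_tijiao2.py may differ.

```python
import torch
import torch.nn.functional as F

def cam_mask(feature_maps, fc_weight, labels, out_size):
    """Build a CAM-based mask in [0, 1] to multiply the noise with.

    feature_maps: (N, C, h, w) activations from the last conv layer.
    fc_weight:    (num_classes, C) weight of the final fully-connected layer.
    labels:       (N,) class indices (e.g. the true or predicted labels).
    """
    w = fc_weight[labels]                                   # (N, C)
    cam = torch.einsum("nc,nchw->nhw", w, feature_maps)     # weighted sum of channels
    cam = F.relu(cam)
    cam = cam / (cam.flatten(1).max(dim=1).values[:, None, None] + 1e-8)
    cam = F.interpolate(cam[:, None], size=out_size, mode="bilinear", align_corners=False)
    return cam                                              # (N, 1, H, W)
```

The mask can then be applied as `noise = noise * cam` inside the attack loop, alongside the edge mask.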
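A hedged sketch of the per-model weighting in the ensemble trick. One natural rule (assumed here, not necessarily the exact one used in attack_tijiao2.py) is to give larger weight to models that still assign high probability to the true class for the current adversarial image, so the attack focuses on the models it has not fooled yet.

```python
import torch
import torch.nn.functional as F

def weighted_ensemble_loss(models, x_adv, y):
    """Cross-entropy loss over an ensemble, re-weighted by each model's confidence in the true class."""
    losses, weights = [], []
    for model in models:
        logits = model(x_adv)
        losses.append(F.cross_entropy(logits, y))
        with torch.no_grad():
            # Probability assigned to the true class: high means the model is not fooled yet.
            p_true = F.softmax(logits, dim=1).gather(1, y[:, None]).mean()
        weights.append(p_true)
    weights = torch.stack(weights)
    weights = weights / weights.sum()
    return sum(w * l for w, l in zip(weights, losses))
```

This loss can replace the single-model cross-entropy in the attack loop above.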
We are not sure these tricks always work; we also tested them on ImageNet (the NIPS 2017 adversarial competition test set), but the results there are still inconclusive.
Jiang Yangzhou jiangyangzhou@sjtu.edu.cn