An easy-to-use, simple adversarial attack tool
Explore the docs »
View Demo
· Report Bug
· Request Feature
There are many great web interfaces for trying adversarial attacks available on GitHub; however, I didn't find one that really suited my needs, so I created this one.
Key features:
- Simple examples to get started
- Multiple types of adversarial attacks available
- Custom model support (vision only, in progress)
If any type of adversarial attack is missing, please consider forking this repo and creating a pull request, or opening an issue.
To get a local copy up and running, follow these simple steps.
Make sure to use Python 3.9.x: torch does not currently support Python 3.10, and native tuple type hinting was introduced in Python 3.9.
Then you only need to install the Python dependencies: `python3 -m pip install -r requirements.txt`
To start the server: `python src/main`
To start, simply select the attack you want to run among: FGSM, TFGSM, BIM, and TBIM.
Then select or drag & drop the image you want to attack and set the attack parameters, described below.
epsilon
: Maximum magnitude of the perturbation added to the image (used by all attacks).

alpha
: Step size applied at each iteration (BIM and TBIM only).

iterations
: Number of iterations to run (BIM and TBIM only).
One-step gradient-based method. Do not use `alpha`, `target`, or `iterations`.
The attack is remarkably powerful, and yet intuitive. It is designed to attack neural networks by leveraging the way they learn, gradients. The idea is simple, rather than working to minimize the loss by adjusting the weights based on the backpropagated gradients, the attack adjusts the input data to maximize the loss based on the same backpropagated gradients. In other words, the attack uses the gradient of the loss w.r.t the input data, then adjusts the input data to maximize the loss.
source: https://pytorch.org/tutorials/beginner/fgsm_tutorial.html
perturbation = image + epsilon * sign(grad)
Original paper: Explaining and Harnessing Adversarial Examples
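For reference, here is a minimal PyTorch sketch of the update above. The function name, the cross-entropy loss, and the `[0, 1]` pixel range are assumptions for illustration, not the exact code in this repo:

```python
import torch.nn.functional as F

def fgsm(model, image, label, epsilon):
    """One-step FGSM sketch: nudge every pixel in the direction that increases the loss.
    Assumes `model` is a PyTorch classifier returning logits and `image` is batched."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # perturbation = image + epsilon * sign(grad)
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in a valid range (assumed [0, 1])
```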
FGSM algorithm with a target label. Do not use `alpha` or `iterations`.
Pretty much the same as FGSM, except that the loss is computed against the chosen target label and the input is adjusted to minimize it, hence the subtraction below.
perturbation = image - epsilon * sign(grad)
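The targeted variant is the same one-step sketch with the sign flipped and the loss computed on the target label (same assumptions and imports as the FGSM sketch above):

```python
def tfgsm(model, image, target, epsilon):
    """Targeted FGSM sketch: step against the gradient to reduce the loss on `target`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target)  # loss against the *target* label
    loss.backward()
    # perturbation = image - epsilon * sign(grad)
    adv = image - epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```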
Iterative FGSM algorithm. Do not use `target`.
$x_{t+1}^{adv} = \mathrm{clip}_{x,\epsilon}\left(x_t^{adv} + \alpha \cdot \mathrm{sign}\left(\nabla_x J(x_t^{adv}, y)\right)\right)$

where $x_0^{adv} = x$ is the original image, $y$ its true label, and $T$ the number of iterations. The step size is usually set to $\epsilon / T \leq \alpha \leq \epsilon$.
source: Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
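As with the FGSM sketch above, a minimal illustrative version of this loop (function name and `[0, 1]` pixel range are assumptions, not the repo's actual API):

```python
def bim(model, image, label, epsilon, alpha, iterations):
    """BIM sketch: take `iterations` steps of size `alpha`, keeping the total
    perturbation within `epsilon` of the original image."""
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(iterations):
        adv = adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        step = adv + alpha * adv.grad.sign()
        # project back into the epsilon-ball around the original image
        adv = (original + (step - original).clamp(-epsilon, epsilon)).clamp(0, 1)
    return adv.detach()
```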
BIM algorithm with a target label: the same iterative scheme as BIM, but each step minimizes the loss toward the chosen target class, as in TFGSM.
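Again as a hedged sketch, the only changes from the BIM loop above are the label the loss is computed on and the sign of the step:

```python
def tbim(model, image, target, epsilon, alpha, iterations):
    """Targeted BIM sketch: same loop as BIM, but stepping against the gradient
    of the loss computed on the target label."""
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(iterations):
        adv = adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        loss.backward()
        step = adv - alpha * adv.grad.sign()  # minus: move toward the target class
        adv = (original + (step - original).clamp(-epsilon, epsilon)).clamp(0, 1)
    return adv.detach()
```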
- Support more attacks
  - Carlini & Wagner
  - DeepFool
  - Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS)
  - Jacobian-based Saliency Map Attack (JSMA)
- Add Changelog
- Custom Model Support
See the open issues for a full list of proposed features and known issues.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/my-feature`)
- Commit your Changes (`git commit -m 'feat: my new feature'`)
- Push to the Branch (`git push origin feature/my-feature`)
- Open a Pull Request
Please try to follow Conventional Commits.
Distributed under the MIT License. See `LICENSE.txt` for more information.
Valentin De Matos - @ThytuVDM - valentin.de-matos@epitech.eu
Project Link: https://github.com/Thytu/OpenAdv