OpenAdv

An easy-to-use adversarial attack tool
Explore the docs »

View Demo · Report Bug · Request Feature


Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

About The Project

There are many great web interfaces for trying adversarial attacks available on GitHub; however, I didn't find one that really suited my needs, so I created this one.

Key features:

  • Simple examples to get started
  • Multiple types of adversarial attacks available
  • Support for custom models (vision only, in progress)

If an attack type you need is missing, please consider forking this repo and creating a pull request, or opening an issue.


Demo OpenAdv Simple

(back to top)


Getting Started

To get a local copy up and running, follow these simple steps.

Make sure to use Python 3.9.x: torch is currently not supported on Python 3.10, and the project relies on native tuple type hints, which were introduced in Python 3.9.

Then install the Python dependencies: python3 -m pip install -r requirements.txt

To start the server: python src/main

(back to top)

Usage

To start, simply select the attack you want to run from: FGSM, TFGSM, BIM, and TBIM.

Then select or drag & drop the image you want to attack and set the attack parameters.

Attack parameters:

  • epsilon : maximum magnitude of the perturbation
  • alpha : step size of each iteration (BIM and TBIM only)
  • iterations : number of iterations (BIM and TBIM only)

FGSM (Fast Gradient Sign Method)

One-step gradient-based method. It does not use the alpha, target, or iterations parameters.

The attack is remarkably powerful, and yet intuitive. It is designed to attack neural networks by leveraging the way they learn, gradients. The idea is simple, rather than working to minimize the loss by adjusting the weights based on the backpropagated gradients, the attack adjusts the input data to maximize the loss based on the same backpropagated gradients. In other words, the attack uses the gradient of the loss w.r.t the input data, then adjusts the input data to maximize the loss.
source: https://pytorch.org/tutorials/beginner/fgsm_tutorial.html

perturbation = image + epsilon * sign(grad)

Original paper: Explaining and Harnessing Adversarial Examples
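
For readers who want the update rule in code, here is a minimal PyTorch sketch of FGSM. It illustrates the formula above and is not OpenAdv's own implementation; the model, image, and label names are hypothetical placeholders, and pixel values are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon):
    """One-step FGSM: nudge the input in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # perturbation = image + epsilon * sign(grad)
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in the assumed [0, 1] range
```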

TFGSM (Targeted Fast Gradient Sign Method)

The FGSM algorithm with a target label. It does not use the alpha or iterations parameters.

Pretty much the same as FGSM, but instead of adding a step that increases the loss on the true label, it subtracts a step so that the loss on the chosen target label decreases.

perturbation = image - epsilon * sign(grad)
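
Continuing the FGSM sketch above (same placeholder names and assumptions), the targeted variant only flips the sign of the step:

```python
def tfgsm(model, image, target, epsilon):
    """Targeted FGSM: step against the gradient to make `target` more likely."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target)
    loss.backward()
    # perturbation = image - epsilon * sign(grad)
    adv = image - epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```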

BIM (Basic Iterative Method)

Iterative FGSM algorithm. It does not use the target parameter.

$x^t = x^{t-1} + \alpha \cdot \operatorname{sign}(grad)$
where $\alpha$ is the step size and $x^t$ is the adversarial image at time $t$.
The step size is usually set to $\epsilon / T \leq \alpha \leq \epsilon$, where $T$ is the number of iterations.
source: Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems
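
A minimal sketch of the iterative loop, under the same assumptions as the FGSM example. Clipping the accumulated perturbation back into the epsilon-ball after each step is a common BIM detail and an assumption here, not necessarily what OpenAdv does; the targeted flag anticipates TBIM below.

```python
def bim(model, image, label, epsilon, alpha, iterations, targeted=False):
    """Iterative FGSM: `iterations` steps of size `alpha`, with the total
    perturbation kept inside an epsilon-ball around the original image."""
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(iterations):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        step = alpha * adv.grad.sign()
        adv = adv - step if targeted else adv + step
        # project onto the epsilon-ball, then back into the valid pixel range
        adv = original + (adv - original).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1).detach()
    return adv
```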

TBIM (Targeted Basic Iterative Method)

BIM algorithm with target label.
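
With the bim sketch above, TBIM is simply bim(model, image, target, epsilon, alpha, iterations, targeted=True): as with TFGSM, the step is subtracted so that the loss on the target label decreases.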

(back to top)

Roadmap

  • Support more attacks
    • Carlini & Wagner
    • DeepFool
    • Limited-memory Broyden-Fletcher-Goldfarb-Shanno
    • Jacobian-based Saliency Map
  • Add Changelog
  • Custom Model Support

See the open issues for a full list of proposed features and known issues.

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/my-feature)
  3. Commit your Changes (git commit -m 'feat: my new feature')
  4. Push to the Branch (git push origin feature/my-feature)
  5. Open a Pull Request

Please try to follow Conventional Commits.

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Valentin De Matos - @ThytuVDM - valentin.de-matos@epitech.eu

Project Link: https://github.com/Thytu/OpenAdv

(back to top)

Acknowledgments

(back to top)
