benediktstroebl/Adversarial-Machine-Learning-Tutorial

Adversarial Machine Learning Tutorial

This tutorial provides users with a fundamental understanding of Adversarial Machine Learning (AML), why it is dangerous, and how its risks can be mitigated. It does so through both mathematical and theoretical explanations as well as hands-on coding examples. Two attacks and their countermeasures are implemented, namely an evasion attack and a membership inference attack. The user is not only introduced to the intricacies of these attacks but also gains an understanding of their risks in general. Machine Learning systems applied in all domains can be subject to adversarial attacks, making the topic paramount to practitioners and policy-makers alike.
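The notebook walks through the full implementations; as a minimal illustration of what an evasion attack looks like, the sketch below applies the Fast Gradient Sign Method (FGSM) in PyTorch to a toy classifier. The model, input shapes, and epsilon value are illustrative assumptions and are not taken from the tutorial code.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.1):
    """Craft an adversarial example with the Fast Gradient Sign Method (FGSM).

    The input is perturbed in the direction of the sign of the loss gradient,
    bounded by epsilon, so that a correctly classified sample may be
    misclassified while looking nearly unchanged.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy setup: a tiny linear classifier and a random 28x28 "image".
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    y = torch.tensor([3])  # arbitrary label for the toy example
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    print((x_adv - x).abs().max())  # perturbation stays within epsilon
```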

In addition to the tutorial itself, we also provide a presentation that introduces Adversarial Machine Learning from a more conceptual angle, a video walking you through the notebook, and a link to open the file directly in Google Colab:

Authors:

Open In Colab
