Merged: mnishant9001/fix-ui #5

4 commits merged on May 23, 2023.
39 changes: 21 additions & 18 deletions 2023/ML01_2023-Adversarial_Attack.md
@@ -1,28 +1,31 @@
---

layout: col-sidebar
type: documentation
altfooter: true
level: 4
auto-migrated: 0
pitch:
document: OWASP Machine Learning Security Top Ten 2023
year: 2023
order: 6
-title: ML01:2023:Adversarial_Attack
+title: ML01:2023 Adversarial Attack
lang: en
author:
contributors:
-tags: OWASP Top Ten 2023, Top Ten, ML01:2023
+tags: OWASP Top Ten 2023, Top Ten, ML01:2023, mltop10
exploitability: 5
prevalence:
detectability: 3
technical: 5
redirect_from:

---

RISK Chart for Scenario One:

-| Threat agents/Attack vectors | Security Weakness | Impact | | |
-| :---: | :---: | :---: | --- | --- |
-| Exploitability: 5 (Easy to exploit)<br>ML Application Specific: 4<br>ML Operations Specific: 3 | Detectability: 3<br>(The adversarial image may not be noticeable to the naked eye, making it difficult to detect the attack) | Technical: 5<br>(The attack requires technical knowledge of deep learning and image processing techniques) | | |
-| Threat Agent: Attacker with knowledge of deep learning and image processing techniques<br>Attack Vector: Deliberately crafted adversarial image that is similar to a legitimate image | Vulnerability in the deep learning model's ability to classify images accurately | Misclassification of the image, leading to security bypass or harm to the system | | |
-| | | | | |
+| Threat agents/Attack vectors | Security Weakness | Impact |
+| --- | :---: | :---: |
+| Exploitability: 5 (Easy to exploit) ML Application Specific: 4 ML Operations Specific: 3 | Detectability: 3 (The adversarial image may not be noticeable to the naked eye, making it difficult to detect the attack) | Technical: 5 (The attack requires technical knowledge of deep learning and image processing techniques) |
+| Threat Agent: Attacker with knowledge of deep learning and image processing techniques Attack Vector: Deliberately crafted adversarial image that is similar to a legitimate image | Vulnerability in the deep learning model's ability to classify images accurately | Misclassification of the image, leading to security bypass or harm to the system |

It is important to note that this chart is only a sample based on the
scenario below, and the actual risk assessment will depend on the
specific circumstances of each machine learning system.

@@ -60,18 +63,18 @@ lead to data theft, system compromise, or other forms of damage.
**How to Prevent:**

1. Adversarial training: One approach to defending against adversarial
   attacks is to train the model on adversarial examples. This can help
   the model become more robust to attacks and reduce its
   susceptibility to being misled (see the sketch after this list).

2. Robust models: Another approach is to use models that are designed
   to be robust against adversarial attacks, such as models hardened
   through adversarial training or models that incorporate explicit
   defense mechanisms.

3. Input validation: Input validation is another important defense
   mechanism that can be used to detect and prevent adversarial
   attacks. This involves checking the input data for anomalies, such
   as unexpected values or patterns, and rejecting inputs that are
   likely to be malicious.
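
To make points 1 and 2 concrete, here is a minimal adversarial-training sketch in PyTorch. The `model`, `loss_fn`, and `optimizer` objects, the `eps` perturbation budget, and the assumption that inputs are normalized to [0, 1] are illustrative choices, not taken from this document:

```python
import torch

def fgsm_example(model, loss_fn, x, y, eps=0.03):
    """Craft a fast-gradient-sign-method (FGSM) adversarial example:
    nudge x in the direction of the loss gradient's sign, bounded by eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # The clamp assumes inputs normalized to [0, 1] (an illustrative assumption).
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, eps=0.03):
    """One training step over a mix of clean and adversarial inputs,
    so the model learns to classify both correctly."""
    x_adv = fgsm_example(model, loss_fn, x, y, eps)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, a model trained this way should also be evaluated against stronger iterative attacks, since robustness to one attack does not imply robustness to all.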

**References:**
17 changes: 10 additions & 7 deletions 2023/ML02_2023-Data_Poisoning_Attack.md
@@ -1,27 +1,30 @@
---

layout: col-sidebar
type: documentation
altfooter: true
level: 4
auto-migrated: 0
pitch:
document: OWASP Machine Learning Security Top Ten 2023
year: 2023
order: 6
-title: ML02:2023:Data_Poisoning_Attack
+title: ML02:2023 Data Poisoning Attack
lang: en
author:
contributors:
-tags: OWASP Top Ten 2023, Top Ten, ML02:2023
+tags: OWASP Top Ten 2023, Top Ten, ML02:2023, mltop10
exploitability: 3
prevalence:
detectability: 2
technical: 4
redirect_from:

---

| Threat agents/Attack vectors | Security Weakness | Impact |
| :---: | :---: | :---: |
| Exploitability: 3 (Medium to exploit)<br>ML Application Specific: 4<br>ML Operations Specific: 3 | Detectability: 2<br>(Limited) | Technical: 4 |
| Threat Agent: Attacker who has access to the training data used for the model.<br>Attack Vector: The attacker injects malicious data into the training data set. | Lack of data validation and insufficient monitoring of the training data. | The model will make incorrect predictions based on the poisoned data, leading to false decisions and potentially serious consequences. |


It is important to note that this chart is only a sample based on the
scenario below, and the actual risk assessment will depend on the
specific circumstances of each machine learning system.
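
The table's "lack of data validation and insufficient monitoring" weakness suggests an obvious first control: screen the training set for statistical outliers before fitting. A minimal sketch with scikit-learn, where the choice of `IsolationForest`, the `contamination` rate, and the function name are illustrative assumptions rather than anything from the OWASP text:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_training_data(X, y, contamination=0.05):
    """Drop training rows that an IsolationForest flags as statistical
    outliers -- a coarse screen for poisoned samples, not a full defense."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    inlier_mask = detector.fit_predict(X) == 1  # fit_predict: 1 = inlier, -1 = outlier
    return X[inlier_mask], np.asarray(y)[inlier_mask]
```

A screen like this only catches poison that looks anomalous; an attacker who keeps poisoned points within the data distribution will evade it, so data provenance tracking and ongoing monitoring remain necessary.
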
21 changes: 12 additions & 9 deletions 2023/ML03_2023-Model_Inversion_Attack.md
@@ -1,26 +1,29 @@
---

layout: col-sidebar
type: documentation
altfooter: true
level: 4
auto-migrated: 0
pitch:
document: OWASP Machine Learning Security Top Ten 2023
year: 2023
order: 6
-title: ML03:2023:Model_Inversion_Attack
+title: ML03:2023 Model Inversion Attack
lang: en
author:
contributors:
-tags: OWASP Top Ten 2023, Top Ten, ML03:2023
+tags: OWASP Top Ten 2023, Top Ten, ML03:2023, mltop10
exploitability: 4
prevalence:
detectability: 2
technical: 4
redirect_from:

---

| Threat agents/Attack vectors | Security Weakness | Impact |
| :---: | :---: | :---: |
| Exploitability: 4 (Medium to exploit)<br>ML Application Specific: 5<br>ML Operations Specific: 3 | Detectability: 2<br>(Limited) | Technical: 4<br>(Moderate technical knowledge required to carry out the attack) |
| Threat Agents: Attackers who have access to the model and input data<br>Attack Vectors: Submitting an image to the model and analyzing the model's response | Model's output can be used to infer sensitive information about the input data | Confidential information about the input data can be compromised |

It is important to note that this chart is only a sample based on the
scenario below, and the actual risk assessment will depend on the
specific circumstances of each machine learning system.
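
The mitigation discussion for this risk is collapsed in the diff, but given that the attack vector above is "submitting an image to the model and analyzing the model's response," one commonly discussed defense is to limit the granularity of that response. A minimal sketch; the function name and return shape are hypothetical, not from this document:

```python
import numpy as np

def restricted_prediction(probs, decimals=1):
    """Return only the top-1 label and a coarsely rounded confidence,
    rather than the full probability vector, reducing the signal an
    inversion attacker can extract from repeated queries."""
    probs = np.asarray(probs)
    top = int(probs.argmax())
    return {"label": top, "confidence": round(float(probs[top]), decimals)}
```

Coarsening outputs trades some legitimate utility (e.g., downstream calibration) for a smaller attack surface, so the rounding level is a deployment decision.
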
16 changes: 10 additions & 6 deletions 2023/ML04_2023-Membership_Inference_Attack.md
@@ -1,23 +1,27 @@
---

layout: col-sidebar
type: documentation
altfooter: true
level: 4
auto-migrated: 0
pitch:
document: OWASP Machine Learning Security Top Ten 2023
year: 2023
order: 6
-title: ML04:2023:Membership_Inference_Attack
+title: ML04:2023 Membership Inference Attack
lang: en
author:
contributors:
-tags: OWASP Top Ten 2023, Top Ten, ML04:2023
+tags: OWASP Top Ten 2023, Top Ten, ML04:2023, mltop10
exploitability:
prevalence:
detectability:
technical:
redirect_from:

---

| Threat agents/Attack vectors | Security Weakness | Impact |
| :---: | :---: | :---: |
| Exploitability: 4 (Effort required to exploit this weakness is moderate)<br>ML Application Specific: 5<br>ML Operations Specific: 3 | Detectability: 3<br>(Detection of this attack is moderately challenging) | Technical: 4<br>(Moderate technical skill required) |
| Hackers or malicious actors who have access to the data and the model<br>Insiders who have malicious intent or are bribed to interfere with the data<br>Unsecured data transmission channels that allow unauthorized access to the data | Lack of proper data access controls<br>Lack of proper data validation and sanitization techniques<br>Lack of proper data encryption<br>Lack of proper data backup and recovery techniques | Unreliable or incorrect model predictions<br>Loss of confidentiality and privacy of sensitive data<br>Legal and regulatory compliance violations<br>Reputational damage |

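For context on how such an attack is mounted: the simplest membership-inference baseline predicts "training-set member" whenever the model is unusually confident on a record, exploiting overfitting. A minimal sketch, assuming a scikit-learn-style `predict_proba` interface; the threshold and names are illustrative:

```python
def membership_inference_baseline(model, X, threshold=0.9):
    """Guess 'member of the training set' whenever the model's top-class
    confidence exceeds a threshold: overfitted models tend to be more
    confident on records they were trained on."""
    probs = model.predict_proba(X)        # (n_samples, n_classes) array assumed
    return probs.max(axis=1) > threshold  # True = predicted training-set member
```

Real attacks (e.g., shadow-model attacks) are more sophisticated, but this confidence gap is the underlying signal they exploit.
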
16 changes: 10 additions & 6 deletions 2023/ML05_2023-Model_Stealing.md
@@ -1,24 +1,28 @@
---

layout: col-sidebar
type: documentation
altfooter: true
level: 4
auto-migrated: 0
pitch:
document: OWASP Machine Learning Security Top Ten 2023
year: 2023
order: 6
-title: ML05:2023:Model_Stealing
+title: ML05:2023 Model Stealing
lang: en
author:
contributors:
-tags: OWASP Top Ten 2023, Top Ten, ML05:2023
+tags: OWASP Top Ten 2023, Top Ten, ML05:2023, mltop10
exploitability: 4
prevalence:
detectability: 3
technical: 4
redirect_from:

---

| Threat agents/Attack vectors | Security Weakness | Impact |
| :---: | :---: | :---: |
| Exploitability: 4 (Effort required to exploit this weakness is moderate)<br>ML Application Specific: 4<br>ML Operations Specific: 3 | Detectability: 3<br>(Detection of this attack is moderately challenging) | Technical: 4<br>(Moderate technical skill required) |
| Agent/Attack Vector: This refers to the entity that carries out the attack; in this case, an attacker who wants to steal the machine learning model. | Unsecured model deployment: The unsecured deployment of the model makes it easier for the attacker to access and steal the model. | Model theft can compromise both the confidentiality of the data used to train the model and the reputation of the organization that developed it.<br>Confidentiality, Reputation |

It is important to note that this chart is only a sample based on the
scenario below, and the actual risk assessment will depend on the
specific circumstances of each machine learning system.
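
Since model extraction typically requires a large volume of queries against the deployed model, one common operational control for the "unsecured model deployment" weakness is per-client query monitoring and throttling. A minimal sliding-window sketch; the class name and the limits are illustrative assumptions:

```python
import time
from collections import defaultdict, deque

class QueryRateLimiter:
    """Flag clients that query a model-serving API at extraction-like
    volumes within a sliding time window."""

    def __init__(self, max_queries=1000, window_seconds=3600):
        self.max_queries = max_queries
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)  # client_id -> query timestamps

    def allow(self, client_id):
        now = time.time()
        timestamps = self.history[client_id]
        # Evict timestamps that have fallen out of the window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        if len(timestamps) >= self.max_queries:
            return False  # suspicious volume: throttle and raise an alert
        timestamps.append(now)
        return True
```

In production this logic would usually live in an API gateway, combined with authentication and anomaly detection on query patterns.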