From 6531041e5e8d9606b91bc0119715d774cb9812b0 Mon Sep 17 00:00:00 2001
From: keykholt
Date: Fri, 11 Aug 2023 09:38:50 -0700
Subject: [PATCH] Update README.md

Signed-off-by: keykholt
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d3a898d..d57e352 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@ Run `pip install -e .` in the top level directory.
 
 Machine learning models are known to be vulnerable to adversarial evasion attacks as illustrated by image classification models. Thoroughly understanding such attacks is critical in order to ensure the safety and robustness of critical AI tasks. However, most adversarial attacks are difficult to deploy against a majority of AI systems because they have focused on image domain with only few constraints.
 
-URET is a solution that enables users to evaluate their models against adversarial evasion attacks regardless of data representation or model arhitecture. In order to generate adversarial examples for a chosen model and data domain, a user does the following:
+URET is a solution that enables users to evaluate their models against adversarial evasion attacks regardless of data representation or model architecture. In order to generate adversarial examples for a chosen model and data domain, a user does the following:
 
 1. Select/Define one or more **Data Transformers**.
 2. Select one or more **Explorer Configurationss** and define its exploration parameters.