Slightly change Robustness with Image Classification tutorial #1567

Open · wants to merge 1 commit into master
14 changes: 7 additions & 7 deletions tutorials/CIFAR_Captum_Robustness.ipynb
@@ -248,7 +248,7 @@
}
],
"source": [
"image_show(image, pred+ \" \" + str(score.item()))\n",
"image_show(image, pred + \" \" + str(score.item()))\n",
"image_show(unnormalize(perturbed_image_fgsm), new_pred_fgsm + \" \" + str(score_fgsm.item()))\n"
]
},
@@ -307,15 +307,15 @@
}
],
"source": [
"image_show(image, pred+ \" \" + str(score.item()))\n",
"image_show(image, pred + \" \" + str(score.item()))\n",
"image_show(unnormalize(perturbed_image_pgd.detach()), new_pred_pgd + \" \" + str(score_pgd.item()))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As seen above, the perturbed input is classified as a ship, confirming the targetted attack was successful. "
"As seen above, the perturbed input is classified as a ship, confirming the targeted attack was successful. "
]
},
{
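For reference, the targeted PGD attack this cell describes uses Captum's `captum.robust.PGD`. A minimal sketch, assuming the tutorial's `model` and `image`; the radius and step settings are illustrative, and class index 8 is "ship" in the standard CIFAR-10 ordering:

```python
import torch
from captum.robust import PGD

# PGD wraps the model with a per-example loss; bounds keep the perturbed
# image in the normalized input range (assumed bounds).
pgd = PGD(model, torch.nn.CrossEntropyLoss(reduction="none"),
          lower_bound=-1, upper_bound=1)

perturbed_image_pgd = pgd.perturb(
    inputs=image,
    radius=0.13,               # L-inf perturbation budget (assumed value)
    step_num=7,                # number of attack iterations (assumed value)
    step_size=0.02,            # per-iteration step size (assumed value)
    target=torch.tensor([8]),  # class 8 is "ship" in CIFAR-10
    targeted=True,             # optimize toward the target class
)
```

Setting `targeted=True` makes PGD minimize the loss toward the target class rather than maximize it away from the true label, which is what "targeted attack" means in the cell above.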
@@ -338,7 +338,7 @@
"source": [
"In addition to adversarial attacks, we have developed an AttackComparator, which allows quantifying model performance against any set of perturbations or attacks, including custom transformations.\n",
"\n",
"In this section, we will use the AttackComparator to measure how this model performs against the FGSM / PGD attacks described above as well as torchvision transforms. Note that the attack comparator can be used with any perturbation or attack functions."
"In this section, we will use the AttackComparator to measure how this model performs against the FGSM / PGD attacks described above as well as torchvision transforms. Note that the AttackComparator can be used with any perturbation or attack functions."
]
},
{
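As a rough sketch of how such a comparison can be wired up with `captum.robust.AttackComparator`, assuming the tutorial's `model`, `image`, and `label`; the metric, attack names, and epsilon here are illustrative assumptions, not the tutorial's exact setup:

```python
import torchvision.transforms as T
from captum.robust import AttackComparator, FGSM

def accuracy_metric(model_out, target):
    # Illustrative metric: fraction of the batch predicted correctly.
    return (model_out.argmax(dim=1) == target).float().mean()

comparator = AttackComparator(forward_func=model, metric=accuracy_metric)

# Register a gradient-based attack and a plain torchvision transform
# side by side; both are treated as perturbation functions.
fgsm = FGSM(model, lower_bound=-1, upper_bound=1)
comparator.add_attack(fgsm, "FGSM", attack_kwargs={"epsilon": 0.15},
                      additional_attack_arg_names=["target"])
comparator.add_attack(T.GaussianBlur(kernel_size=5), "Gaussian Blur")

# `target` is forwarded to the metric and to attacks that declared it.
results = comparator.evaluate(image, target=label)
```

Each registered attack is applied to the inputs and the metric is recorded per attack, so gradient-based attacks and ordinary transforms can be compared on equal footing.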
@@ -442,7 +442,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The comparator also allows us to aggregate results over a series of batches. We start by resetting the stored metrics from this example, and evaluate a series of batches from the test dataset. Once complete, we can look at the summary returned by the Attack Comparator."
"The comparator also allows us to aggregate results over a series of batches. We start by resetting the stored metrics from this example, and evaluate a series of batches from the test dataset. Once complete, we can look at the summary returned by the AttackComparator."
]
},
{
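A hedged sketch of that aggregation loop, assuming the `comparator` from the previous sketch and a standard CIFAR-10 `test_loader`; the ten-batch cutoff is arbitrary:

```python
# Drop the metrics stored from the single-image example.
comparator.reset()

# Accumulate metrics over a series of test batches.
for i, (batch, labels) in enumerate(test_loader):
    if i >= 10:  # arbitrary cutoff to keep the evaluation quick
        break
    comparator.evaluate(batch, target=labels)

# Per-attack aggregate of the metric across all evaluated batches.
print(comparator.summary())
```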
@@ -585,7 +585,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that a kernel size of 5 was the minimum necessary to misclassify this image. Let's look at the perturbed image and corresponding prediction, and how this compares with the original."
"We see that a kernel size of 7 was the minimum necessary to misclassify this image. Let's look at the perturbed image and corresponding prediction, and how this compares with the original."
]
},
{
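The search that prose refers to matches Captum's `captum.robust.MinParamPerturbation`, which steps through candidate parameter values until the model's prediction flips. A sketch under assumed ranges, with a hypothetical `gaussian_blur` helper:

```python
import torchvision.transforms as T
from captum.robust import MinParamPerturbation

def gaussian_blur(image, kernel_size):
    # Perturbation whose strength is controlled by the searched argument.
    return T.GaussianBlur(kernel_size=kernel_size)(image)

min_pert = MinParamPerturbation(
    forward_func=model,
    attack=gaussian_blur,
    arg_name="kernel_size",
    mode="linear",   # step linearly through candidate kernel sizes
    arg_min=3,
    arg_max=15,
    arg_step=2,      # odd kernel sizes only: 3, 5, 7, ...
)

# Returns the first misclassified blurred image and the kernel size that
# produced it (7 for the example image discussed above).
alt_im, min_kernel = min_pert.evaluate(image, target=label)
```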
@@ -624,7 +624,7 @@
"image_show(alt_im, new_pred_blur + \" \" + str(score_blur.item()))\n",
"\n",
"# Original\n",
"image_show(image, pred+ \" \" + str(score.item()))\n"
"image_show(image, pred + \" \" + str(score.item()))\n"
]
},
{