ProTIP: Probabilistic Robustness Verification on Text-to-Image Diffusion Models against Stochastic Perturbation
Yi Zhang, Xingyu Zhao*, Yun Tang, Wenjie Ruan, Xiaowei Huang, Siddartha Khastgir, Paul Jennings
*Corresponding Author
We propose an efficient framework, ProTIP, to evaluate the probabilistic robustness of Text-to-Image Diffusion Models with statistical guarantees.
- [2024/7/1] Our work has been accepted by the 18th European Conference on Computer Vision (ECCV 2024)!
Text-to-Image (T2I) Diffusion Models (DMs) have shown impressive abilities in generating high-quality images from simple text descriptions. However, like many Deep Learning (DL) models, DMs lack robustness. While there are attempts to evaluate the robustness of T2I DMs as a binary or worst-case problem, they cannot answer how robust the model is in general whenever an adversarial example (AE) can be found. In this study, we first introduce a probabilistic notion of T2I DMs' robustness, and then establish an efficient framework, ProTIP, to evaluate it with statistical guarantees. The main challenges stem from: (i) the high computational cost of the generation process; and (ii) the fact that determining whether a perturbed input is an AE involves comparing two output distributions, which is fundamentally harder than in other DL tasks such as classification, where an AE is identified upon misprediction of a label. To tackle these challenges, we employ sequential analysis with efficacy and futility early-stopping rules in the statistical testing for identifying AEs, and adaptive concentration inequalities to dynamically determine the just-right number of stochastic perturbations once the verification target is met. Empirical experiments validate the effectiveness and efficiency of ProTIP over common T2I DMs. Finally, we demonstrate an application of ProTIP to rank commonly used defence methods.
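As a rough illustration of the adaptive-concentration idea (a minimal, hypothetical sketch: the function names, the target/confidence parameters, and the exact form of the bound are our illustration, not the paper's implementation), the sampler can keep drawing stochastic perturbations and stop as soon as a confidence interval that remains valid at a data-dependent stopping time separates the estimated robustness from the verification target:

```python
import math

def verify_probabilistic_robustness(is_robust, target=0.9, delta=0.05, max_n=100_000):
    """Estimate the probability that a stochastically perturbed prompt is
    *not* an AE, stopping as soon as an adaptive Hoeffding-style confidence
    interval separates the estimate from the verification target."""
    successes = 0
    for n in range(1, max_n + 1):
        successes += int(is_robust())  # one stochastic perturbation + AE check
        p_hat = successes / n
        # adaptive half-width: a Hoeffding bound union-bounded over all n,
        # so the (1 - delta) guarantee holds at the stopping time
        eps = math.sqrt(math.log(2 * n * (n + 1) / delta) / (2 * n))
        if p_hat - eps > target:
            return "robust", p_hat, n      # target met with confidence 1 - delta
        if p_hat + eps < target:
            return "not robust", p_hat, n  # target violated with confidence 1 - delta
    return "undecided", successes / max_n, max_n
```

The "just-right" sample size falls out automatically: easy verdicts (estimates far from the target) terminate after a handful of perturbations, while borderline cases keep sampling until the interval is tight enough.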
Set up the code and environment as follows:
```shell
git clone https://github.com/wellzline/ProTIP.git
cd ProTIP
conda create --name ProTIP python
conda activate ProTIP
pip install -r requirements.txt
```
```shell
cd generate_AE
python char_level.py
```
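Conceptually, a character-level stochastic perturbation randomly substitutes, inserts, or deletes characters in a prompt. The following is an illustrative sketch only (the function and its parameters are hypothetical, not the interface of `char_level.py`):

```python
import random
import string

def perturb_chars(prompt, n_edits=1, seed=None):
    """Apply random character-level edits (substitute, insert, or delete)
    to a prompt -- a simple model of stochastic character-level
    perturbation; illustrative only."""
    rng = random.Random(seed)
    chars = list(prompt)
    for _ in range(n_edits):
        op = rng.choice(["sub", "ins", "del"])
        i = rng.randrange(len(chars))
        if op == "sub":
            chars[i] = rng.choice(string.ascii_lowercase)
        elif op == "ins":
            chars.insert(i, rng.choice(string.ascii_lowercase))
        elif len(chars) > 1:  # "del": never empty the prompt
            del chars[i]
    return "".join(chars)
```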
```
|-- generate_AE
|   |-- coco
|   |   |-- char_AE
|   |   |   |-- result_1.csv
|   |   |   |-- result_2.csv
|   |   |   |-- ..
|   |-- origin_prompts
|   |   |-- coco.txt
|   |   |-- candidates.txt
|   |-- attack_test.py
|   |-- char_level.py
```
Key configuration parameters:

```python
origin_prompt_path = "./generate_AE/origin_prompts/coco.txt"  # clean source prompts
num_inference_steps = 50  # denoising steps per generated image
num_batch = 5
batch_size = 12
sample_num = 800
model_id = "runwayml/stable-diffusion-v1-5"         # T2I diffusion model under test
clip_version = "openai/clip-vit-large-patch14-336"  # CLIP model used for scoring
e_threshold = 0.08
sigma = 0.3
stop_early = 0
```
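The sequential-analysis side of the framework, with its efficacy and futility early-stopping rules, can be sketched as a Wald sequential probability ratio test. This is a hypothetical illustration of the idea only: the hypotheses `p0`/`p1`, the reduction of each generated image to a Bernoulli outcome, and the tie to the batch parameters above are our assumptions, not the paper's exact statistic.

```python
import math

def sprt_is_ae(exceeds_threshold, p0=0.2, p1=0.5, alpha=0.05, beta=0.05,
               num_batch=5, batch_size=12):
    """Decide whether a perturbed prompt is an AE via a sequential
    probability ratio test. `exceeds_threshold()` returns True when one
    generated image's quality drop exceeds a threshold; we test
    H0: drop-rate = p0 (not an AE) vs H1: drop-rate = p1 (an AE)."""
    upper = math.log((1 - beta) / alpha)  # efficacy boundary: declare AE
    lower = math.log(beta / (1 - alpha))  # futility boundary: declare not-AE
    llr = 0.0  # running log-likelihood ratio
    for _ in range(num_batch):
        for _ in range(batch_size):
            x = exceeds_threshold()
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return True   # stop early: enough evidence the prompt is an AE
            if llr <= lower:
                return False  # stop early: enough evidence it is not
    return llr >= 0  # budget exhausted: fall back to the sign of the evidence
```

Because every extra Bernoulli sample here costs a full diffusion generation, early stopping on clear-cut prompts is where most of the compute saving comes from.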
```shell
python version_sample.py
python version_defence.py
```
- Models
- Data acquisition and processing
If you find this repo useful, please cite:
```bibtex
@inproceedings{zhang2024protip,
  title={ProTIP: Probabilistic Robustness Verification on Text-to-Image Diffusion Models against Stochastic Perturbation},
  author={Zhang, Yi and Tang, Yun and Ruan, Wenjie and Huang, Xiaowei and Khastgir, Siddartha and Jennings, Paul and Zhao, Xingyu},
  booktitle={European Conference on Computer Vision},
  year={2024},
  organization={Springer}
}
```