Official repository for Interpreting Vulnerabilities of Multi-Instance Learning to Adversarial Perturbations.
For any questions, please contact inki.yinji@gmail.com.
My home pages:
The code implements two MIL attackers: MI-CAP and MI-UAP.
If the input is a bag of images, the file names are annotated as MI-CAP2D and MI-UAP2D.
If the input is a bag of feature vectors, the file names are annotated as MI-CAP and MI-UAP.
For the ShanghaiTech and UCF-Crime data sets, simply run MI-CAP and MI-UAP.
For MNIST, CIFAR10, and STL10, simply run MI-CAP2D and MI-UAP2D (see the sketch below).
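As a rough guide, the choice of script reduces to a lookup on the data set. The file names in the snippet below (MI_UAP.py, MI_CAP2D.py, etc.) are only assumptions about how the hyphenated names above map onto Python files, so adjust them to the actual repository layout.

```python
# Minimal sketch: pick the attacker script from the data set name.
# The returned file names are hypothetical; use the repository's real file names.
FEATURE_BAG_DATASETS = {"shanghai", "ucf"}           # bags of pre-extracted feature vectors
IMAGE_BAG_DATASETS = {"mnist", "cifar10", "stl10"}   # bags of raw images

def attacker_script(data_type: str, attack: str = "uap") -> str:
    """Return the (assumed) script name for a given data set and attack."""
    if data_type in FEATURE_BAG_DATASETS:
        return f"MI_{attack.upper()}.py"      # e.g. MI_UAP.py or MI_CAP.py
    if data_type in IMAGE_BAG_DATASETS:
        return f"MI_{attack.upper()}2D.py"    # e.g. MI_UAP2D.py or MI_CAP2D.py
    raise ValueError(f"Unknown data_type: {data_type}")

print(attacker_script("shanghai"))               # -> MI_UAP.py
print(attacker_script("cifar10", attack="cap"))  # -> MI_CAP2D.py
```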
Experimental parameters (a configuration sketch follows this list):
- xi: the magnitude of the perturbation
  - For ShanghaiTech and UCF-Crime, the default is 0.01
  - For image data sets, the default is 0.2
- mode: the gradient computation mode, either "ave" or "att"
- net_type: the attacked network; the choices are:
  - ab: ABMIL
  - ga: GAMIL
  - la: LAMIL
  - ds: DSMIL
  - ma: MAMIL
- data_type:
  - For MI-CAP and MI-UAP: "shanghai" or "ucf"
  - For MI-CAP2D and MI-UAP2D: "mnist", "cifar10", or "stl10"
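The snippet below sketches one way such a configuration might look. The entry point run_attack and the exact argument names are hypothetical, inferred from the parameter list above rather than taken from the code; check the actual scripts for the real interface.

```python
# Minimal configuration sketch for the attack parameters described above.
VALID_MODES = {"ave", "att"}                 # gradient computation modes
VALID_NETS = {"ab", "ga", "la", "ds", "ma"}  # ABMIL, GAMIL, LAMIL, DSMIL, MAMIL

config = {
    "xi": 0.01,              # perturbation magnitude: 0.01 for shanghai/ucf, 0.2 for image data
    "mode": "att",           # "ave" or "att"
    "net_type": "ab",        # which MIL network to attack
    "data_type": "shanghai", # data set name
}

assert config["mode"] in VALID_MODES
assert config["net_type"] in VALID_NETS
# run_attack(**config)       # hypothetical entry point exposed by MI_UAP / MI_CAP
```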
You can cite our paper as:
@article{Zhang:2023:109725,
  author  = {Yu-Xuan Zhang and Hua Meng and Xue-Mei Cao and Zheng Chun Zhou and Mei Yang and Avik Ranjan Adhikary},
  title   = {Interpreting vulnerabilities of multi-instance learning to adversarial perturbations},
  journal = {Pattern Recognition},
  pages   = {109725},
  year    = {2023},
  url     = {https://doi.org/10.1016/j.patcog.2023.109725}
}