Generating adversarial examples for pre-trained code models
This repo contains the artifacts for the adversarial attack section of the paper An Extensive Study on Pre-trained Models for Program Understanding and Generation, published at ISSTA'22.
After preparing the model and the dataset, first run the dataset/*/transform_*.py scripts
to generate adversarial input samples, then run run.sh
to start all experiments, as sketched below.
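
A minimal sketch of the workflow; the dataset subdirectory name and script name here are hypothetical placeholders, so substitute the actual directories and transform_*.py scripts present under dataset/ in this repo.

```bash
# Generate adversarial input samples for one task
# ("summarization" is a hypothetical example directory;
# repeat for each dataset/* subdirectory you want to attack)
python dataset/summarization/transform_summarization.py

# Launch all attack experiments
bash run.sh
```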
This project is a fork of TextAttack.