CodeAttack

Generating adversarial examples for pre-trained code models

This repo contains the artifacts for the adversarial attack section of the paper *An Extensive Study on Pre-trained Models for Program Understanding and Generation*, published at ISSTA '22.

After preparing the model and the dataset, first run the `dataset/*/transform_*.py` scripts to generate adversarial input samples, then run `run.sh` to start all experiments.
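A minimal sketch of that workflow, assuming the transform scripts take no required arguments and that everything is run from the repository root (check each script for its actual options):

```bash
# Sketch only: actual script arguments and working directories may differ.

# 1. Generate adversarial input samples for every dataset.
for script in dataset/*/transform_*.py; do
    python "$script"
done

# 2. Launch all attack experiments.
bash run.sh
```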

This project is a fork of TextAttack.
