This repository contains a replication of the ACL 2024 paper *Stealthy Attack on Large Language Model based Recommendation*.
In this paper, we demonstrate that attackers can significantly boost an item's exposure by merely altering its textual content during the testing phase, without requiring direct interference with the model's training process.
Main dependencies:
- Python 3.9.0
- torch 2.0.1
- transformers 4.33.1
- textattack 0.3.9
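A minimal environment setup, assuming the standard PyPI package names and a Python 3.9 environment:

```bash
pip install torch==2.0.1 transformers==4.33.1 textattack==0.3.9
```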
- Prepare data following here.
- Put the processed data in the `./finetune_data/{dataset_name}/` folder. It should contain `meta_data.json`, `smap.json`, `umap.json`, `train.json`, `valid.json`, and `test.json` (a quick sanity check is sketched below).
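As a sanity check before running anything, you can verify the processed folder. This helper is not part of the repository's scripts; the file names are the six listed above:

```python
import json
from pathlib import Path

# Hypothetical helper (not from the repo): verify a processed dataset
# folder contains the six files the scripts expect, and that each parses.
EXPECTED = ["meta_data.json", "smap.json", "umap.json",
            "train.json", "valid.json", "test.json"]

def check_dataset(dataset_name, root="./finetune_data"):
    folder = Path(root) / dataset_name
    for name in EXPECTED:
        path = folder / name
        assert path.exists(), f"missing {path}"
        with open(path) as f:
            json.load(f)  # fail fast on truncated or corrupt JSON
    print(f"{folder} looks complete")

check_dataset("beauty")
```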
- Download the Longformer checkpoint from here and put it in `./longformer-base-4096/`.
- Pretrain the model yourself, or download the pretrained checkpoint from here and put it at `./pretrain_ckpt/recformer_seqrec_ckpt.bin` (a loading sketch follows below).
- If you want to attack the finetuned models, finetune the model and save the checkpoints following here, then put them at `./checkpoints/{dataset_name}/best_model.bin`.
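A minimal sketch for inspecting the downloaded pretrained checkpoint, assuming the `.bin` file is a plain PyTorch `state_dict` saved with `torch.save` (if the repo wraps it in another object, adjust accordingly):

```python
import torch

# Load the pretrained RecFormer checkpoint on CPU and list a few
# parameter names/shapes. Assumes the .bin stores a plain state_dict.
state_dict = torch.load("./pretrain_ckpt/recformer_seqrec_ckpt.bin",
                        map_location="cpu")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
print(f"{len(state_dict)} tensors total")
```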
- Run the attack as:

  ```bash
  python attack.py --attack textfooler --dataset beauty
  ```

  Logs will be saved in `./logs` and the attack results will be saved in `./results`. A standalone illustration of what the `textfooler` recipe does is given below.
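`attack.py` builds on the `textattack` package listed above. The snippet below is only a standalone demonstration of the TextFooler recipe, run against a public HuggingFace sentiment classifier (`textattack/bert-base-uncased-imdb`) rather than the recommender; the item text and label are made up for illustration:

```python
from textattack.attack_recipes import TextFoolerJin2019
from textattack.models.wrappers import HuggingFaceModelWrapper
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# TextFooler swaps words for nearest-neighbor synonyms until the victim
# model's prediction changes, keeping the text semantically close.
name = "textattack/bert-base-uncased-imdb"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)
attack = TextFoolerJin2019.build(HuggingFaceModelWrapper(model, tokenizer))

item_text = "A gentle everyday moisturizer with a light floral scent."
# The second argument is the current label; if the model predicts
# differently, TextAttack skips the example.
result = attack.attack(item_text, 1)
print(result.perturbed_text())
```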
- Run `python inference.py` to evaluate the influence of the attack on the recommendation performance. A toy sketch of an exposure-style metric follows below.
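For intuition on what "influence on the recommendation performance" means here, a toy sketch of an exposure-style metric on dummy scores (hypothetical helper, not from `inference.py`; the real script computes the paper's metrics from model outputs):

```python
import numpy as np

# Exposure@k: fraction of users whose top-k list contains the target item.
def exposure_at_k(scores, target_item, k=10):
    topk = np.argsort(-scores, axis=1)[:, :k]  # top-k items per user
    return float((topk == target_item).any(axis=1).mean())

rng = np.random.default_rng(0)
scores_before = rng.normal(size=(100, 500))  # users x items (dummy scores)
scores_after = scores_before.copy()
scores_after[:, 42] += 2.0                   # attack boosts the target item
print(exposure_at_k(scores_before, 42), exposure_at_k(scores_after, 42))
```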
Please cite the paper if you use this code in your work:
```bibtex
@article{zhang2024stealthy,
  title={Stealthy Attack on Large Language Model based Recommendation},
  author={Zhang, Jinghao and Liu, Yuting and Liu, Qiang and Wu, Shu and Guo, Guibing and Wang, Liang},
  journal={arXiv preprint arXiv:2402.14836},
  year={2024}
}
```
The code is based on RecFormer and PromptBench. We thank the authors for their wonderful work.