This is the PyTorch implementation of E2URec, proposed in the paper "Towards Efficient and Effective Unlearning of Large Language Models for Recommendation" (Frontiers of Computer Science, 2024).
Install the dependencies:
pip install -r requirements.txt
Scripts for data preprocessing are included in the data_preprocess directory.
First, use ml-1m.ipynb to preprocess MovieLens-1M.
Then, convert the data into text format:
python data2json.py --K 10 --temp_type simple --set train --dataset ml-1m
python data2json.py --K 10 --temp_type simple --set valid --dataset ml-1m
python data2json.py --K 10 --temp_type simple --set test --dataset ml-1m
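The three conversion commands above differ only in the --set argument, so they can be run in a single loop. A minimal sketch (shown as a dry run with echo; remove it to actually execute, assuming data2json.py is in the current directory):

```shell
# Convert each data split to text format with the same settings.
# "echo" makes this a dry run that only prints the commands.
for split in train valid test; do
  echo python data2json.py --K 10 --temp_type simple --set "$split" --dataset ml-1m
done
```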
Finally, use split_ml-1m.ipynb to split the data into train/valid/test and retained/forgotten sets.
Our method E2URec can be trained by running:
sh train_e2urec.sh
We also provide shell scripts for baselines.
To run the Retrain baseline:
sh train_normal.sh
To run the SISA baseline:
sh train_sisa.sh
To run the NegGrad baseline:
sh train_ga.sh
To run the Bad-T baseline:
sh train_rl.sh
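To compare against every baseline, the four scripts above can be launched in sequence. A minimal sketch (dry run with echo; remove it to actually train, assuming the scripts sit in the repo root):

```shell
# Run all baseline training scripts one after another.
# "echo" makes this a dry run that only prints the commands.
for script in train_normal.sh train_sisa.sh train_ga.sh train_rl.sh; do
  echo sh "$script"
done
```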