Paper - [ArXiv] [ACL Anthology]
- RDRec: Rationale Distillation for LLM-based Recommendation, ACL 2024 Main (short).
- Xinfeng Wang, Jin Cui, Yoshimi Suzuki, Fumiyo Fukumoto.
- Please use the latest code released on June 11th, 2024.
- The checkpoints of the RDRec model for Step 2 have been uploaded to Google Drive and Baidu Drive.
- The experimental setup follows POD. If you run into any problems, please check our code or the POD paper.
Get the license from [the site](https://llama.meta.com/llama-downloads/)
>> cd llama
>> ./download.sh (License required)
>> pip install -e .
>> torchrun --nproc_per_node 1 example_chat_completion.py \
--ckpt_dir llama-2-7b-chat/ \
--tokenizer_path tokenizer.model \
--max_seq_len 512 --max_batch_size 6
>> torchrun --nproc_per_node 1 data/{dataset}/distillation_{dataset}.py \
--ckpt_dir llama/llama-2-7b-chat/ \
--tokenizer_path llama/tokenizer.model \
--max_seq_len 512 --max_batch_size 6
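The distillation command above prompts Llama-2-chat to generate a rationale for each user–item interaction. As a rough illustration of what such a prompt can look like (the real template lives in data/{dataset}/distillation_{dataset}.py; the function name and wording here are hypothetical, not the repo's):

```python
# Illustrative only: a rationale-distillation prompt builder. The actual
# template used by RDRec is defined in data/{dataset}/distillation_{dataset}.py.
def build_rationale_prompt(purchased_titles, target_title):
    """Ask the LLM to describe the user's preference and the item's appeal."""
    history = ", ".join(purchased_titles)
    return (
        f"A user bought: {history}. "
        f"In one sentence each, describe the user's preference "
        f"and explain why the user would enjoy {target_title}."
    )

prompt = build_rationale_prompt(["running shoes", "sports watch"], "a fitness tracker")
print(prompt)
```

The generated rationales are then written back into the dataset files and used as additional supervision in Step 2.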
>> pip install -r requirements.txt
>> python pretrain.py --data_dir ./data/{dataset}/ --cuda --batch_size 64 --checkpoint ./checkpoint/{dataset}/
>> python seq.py --data_dir ./data/{dataset}/ --cuda --batch_size 32 --checkpoint ./checkpoint/{dataset}/
>> python topn.py --data_dir ./data/{dataset}/ --cuda --batch_size 32 --checkpoint ./checkpoint/{dataset}/
>> python exp.py --data_dir ./data/{dataset}/ --cuda --batch_size 32 --checkpoint ./checkpoint/{dataset}/
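The four steps above can be chained in a small shell wrapper. A dry-run sketch (`beauty` is an assumed dataset name, substitute your own; drop the `echo` to actually launch training):

```shell
# Dry-run sketch of the RDRec training pipeline: prints each command so you
# can verify paths first. Remove `echo` to execute. `beauty` is an assumed
# dataset name -- substitute the one you use.
DATASET=beauty
echo "python pretrain.py --data_dir ./data/${DATASET}/ --cuda --batch_size 64 --checkpoint ./checkpoint/${DATASET}/"
for TASK in seq topn exp; do
  echo "python ${TASK}.py --data_dir ./data/${DATASET}/ --cuda --batch_size 32 --checkpoint ./checkpoint/${DATASET}/"
done
```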
- All experiments, including rationale distillation, can be conducted on a single Nvidia GeForce RTX 3090 (24GB memory). Reduce the batch size if you encounter an OOM error on some datasets.
- RDRec's results for sequential recommendation fluctuate somewhat across runs. We reported the average over 10 trials in the paper (see t_test.py for details). If your results are not ideal, please pre-train the model again.
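For significance testing over the 10-trial averages, here is a minimal stdlib-only sketch of a paired t-test (the repo's t_test.py is the authoritative version; the scores below are made-up placeholders, and 2.262 is the two-tailed critical value for df = 9 at p < 0.05):

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic for two equal-length lists of per-run scores."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Made-up NDCG@10 scores from 10 runs of each model (placeholders only).
rdrec    = [0.052, 0.050, 0.051, 0.053, 0.049, 0.052, 0.051, 0.050, 0.053, 0.052]
baseline = [0.048, 0.047, 0.049, 0.048, 0.046, 0.049, 0.047, 0.048, 0.047, 0.049]

t = paired_t(rdrec, baseline)
# With df = 9, |t| > 2.262 means significance at p < 0.05 (two-tailed).
print(f"t = {t:.3f}, significant: {abs(t) > 2.262}")
```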
- If you have any questions, please feel free to contact me at kaysenn@163.com.
If this repository helps you, please cite:
@article{wang2024rdrec,
  title={RDRec: Rationale Distillation for LLM-based Recommendation},
  author={Wang, Xinfeng and Cui, Jin and Suzuki, Yoshimi and Fukumoto, Fumiyo},
  journal={arXiv preprint arXiv:2405.10587},
  year={2024}
}
- Enhancing High-order Interaction Awareness in LLM-based Recommender Model, EMNLP 2024 Main.
- Xinfeng Wang, Jin Cui, Fumiyo Fukumoto, and Yoshimi Suzuki.