IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT (SIGIR2024)
If you are interested in adopting parameter-efficient fine-tuning (PEFT) in recommendation, you can also refer to our previous WSDM 2024 paper, Adapter4Rec.
- Release the IISAN(Uncached)
- Release baseline approaches
- Release the IISAN(Cached) (target: April 30, 2024; completed early on April 15, 2024)
- Release the datasets and IISAN(Cached)'s hidden states
If you have any questions or discover a bug in the paper or code, please do not hesitate to open an issue or submit a pull request.
Multimodal foundation models are transformative in sequential recommender systems, leveraging powerful representation learning capabilities. While Parameter-efficient Fine-tuning (PEFT) is commonly used to adapt foundation models for recommendation tasks, most research prioritizes parameter efficiency, often overlooking critical factors like GPU memory efficiency and training speed. Addressing this gap, our paper introduces IISAN (Intra- and Inter-modal Side Adapted Network for Multimodal Representation), a simple plug-and-play architecture using a Decoupled PEFT structure and exploiting both intra- and inter-modal adaptation.
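For intuition, below is a minimal PyTorch sketch of the decoupled idea behind IISAN: the pretrained towers stay frozen, and only a small side network that consumes their layer-wise hidden states is trained. The class names, gated fusion, and dimensions are illustrative assumptions, not the exact IISAN implementation.

```python
import torch
import torch.nn as nn

class SideAdapterBlock(nn.Module):
    """Small trainable block adapting one frozen backbone layer (illustrative)."""
    def __init__(self, hidden_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)
        self.gate = nn.Parameter(torch.zeros(1))  # learned fusion gate

    def forward(self, side_state, backbone_state):
        # Gated fusion of the running side state with the frozen backbone's
        # hidden state, followed by a lightweight bottleneck transform.
        g = torch.sigmoid(self.gate)
        fused = g * side_state + (1 - g) * backbone_state
        return fused + self.up(torch.relu(self.down(fused)))

class DecoupledSideNetwork(nn.Module):
    """Trainable side network; gradients never enter the frozen backbone."""
    def __init__(self, num_layers: int, hidden_dim: int):
        super().__init__()
        self.blocks = nn.ModuleList(
            SideAdapterBlock(hidden_dim) for _ in range(num_layers))

    def forward(self, backbone_hidden_states):
        # backbone_hidden_states: per-layer tensors from a frozen tower,
        # detached so backpropagation stops at the side network.
        side = backbone_hidden_states[0].detach()
        for block, h in zip(self.blocks, backbone_hidden_states[1:]):
            side = block(side, h.detach())
        return side
```

Because the backbone states are detached, the heavy towers need no gradient or activation storage, which is the source of the memory and speed savings reported below.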
IISAN matches the performance of full fine-tuning (FFT) and state-of-the-art PEFT. More importantly, it significantly reduces GPU memory usage, from 47 GB to just 3 GB for multimodal sequential recommendation tasks, and cuts training time per epoch from 443s to 22s compared with FFT. This is also a notable improvement over Adapter and LoRA, which require 37-39 GB of GPU memory and 350-380 seconds per epoch for training.
Furthermore, we propose a new composite efficiency metric, TPME (Training-time, Parameter, and GPU Memory Efficiency), to counter the prevalent misconception that parameter efficiency alone represents overall efficiency. TPME provides more comprehensive insight into practical efficiency comparisons between methods. In addition, we provide an accessible efficiency analysis of all PEFT and FFT approaches, which demonstrates the superiority of IISAN.
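As a rough illustration of the composite idea (the paper defines TPME's exact normalization and weighting), a TPME-style score can combine training-time, parameter, and GPU-memory costs normalized against full fine-tuning. The equal weights and the function below are assumptions for illustration only:

```python
def tpme_style_score(method, fft, weights=(1/3, 1/3, 1/3)):
    """Illustrative composite efficiency score (lower is better).

    `method` and `fft` are (train_time, trainable_params, gpu_mem) tuples;
    each term is normalized by the full fine-tuning (FFT) cost. The exact
    normalization and weights used for TPME are defined in the paper.
    """
    return sum(w * m / f for w, m, f in zip(weights, method, fft))

# Time (s/epoch) and memory (GB) figures come from the comparison above;
# the trainable-parameter entries here are placeholders, not paper numbers.
print(tpme_style_score(method=(22, 1.0, 3), fft=(443, 100.0, 47)))
```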
conda create -n iisan python=3.8
conda activate iisan
pip install torch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 loralib==0.1.1 transformers==4.20.1 lmdb pandas
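A quick sanity check after installation confirms the pinned versions are importable and the GPU is visible:

```python
import torch, transformers, loralib

print(torch.__version__)          # expect 1.13.0
print(transformers.__version__)   # expect 4.20.1
print(torch.cuda.is_available())  # True if a CUDA GPU is visible
```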
The complete textual recommendation datasets are available under the Dataset directory.
Download the image files:
"am_image_is.zip" for Scientific dataset from this link
"am_image_mi.zip" for Instruments dataset from this link
"am_image_op.zip" for Office dataset from this link
Unzip these files under "Dataset/", then run the following to build the LMDB files:
cd Dataset/
python build_lmdb.py
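To sanity-check the generated LMDB files, a minimal read-back sketch follows; the file name and the key/value schema (assumed here to map item keys to encoded image bytes) are determined by build_lmdb.py.

```python
import lmdb

# The .lmdb path is an assumption; point it at whatever build_lmdb.py wrote.
env = lmdb.open("am_image_is.lmdb", readonly=True, lock=False)
with env.begin() as txn:
    print("entries:", txn.stat()["entries"])   # number of stored records
    key, value = next(iter(txn.cursor()))      # peek at the first record
    print("first key:", key, "| value size:", len(value), "bytes")
env.close()
```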
Download "pytorch_model.bin" of the vit-base-patch16-224 from this link and bert-base-uncased from this link. Then put them under the respective subfolder under "pretrained_models/".
cd Code_Uncached/scripts/
python run_IISAN.py
Note: In theory, IISAN(Cached) only improves training efficiency; because the backbones are frozen, the cached hidden states are identical to those IISAN(Uncached) recomputes, so performance is unchanged.
cd Code_Cached/
python preprocess_vectors.py
cd scripts/
python run_IISAN.py
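The caching idea is that, since the backbones are frozen, each item's hidden states never change and can be precomputed once (this is what preprocess_vectors.py does) rather than recomputed every epoch. Below is a hedged sketch of that pattern, with a hypothetical encoder interface and batch format:

```python
import torch

@torch.no_grad()
def cache_hidden_states(frozen_encoder, dataloader, out_path):
    """Precompute per-item hidden states of a frozen backbone once.

    Because the backbone receives no gradient updates, these tensors are
    identical to what the uncached pipeline would recompute each epoch.
    """
    frozen_encoder.eval()
    cache = {}
    for item_ids, inputs in dataloader:  # hypothetical batch format
        out = frozen_encoder(**inputs, output_hidden_states=True)
        for i, item in enumerate(item_ids):
            # Keep every layer's output so the side network can consume them.
            cache[int(item)] = torch.stack(
                [h[i].cpu() for h in out.hidden_states])
    torch.save(cache, out_path)
```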
If you find our paper useful in your work, please cite it as:
@inproceedings{fu2024iisan,
  title={IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT},
  author={Fu, Junchen and Ge, Xuri and Xin, Xin and Karatzoglou, Alexandros and Arapakis, Ioannis and Wang, Jie and Jose, Joemon M},
  booktitle={Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages={687--697},
  year={2024}
}
Our GAIR Lab, specializing in generative AI solutions for information retrieval tasks, is actively seeking highly motivated Ph.D. students with a strong background in artificial intelligence.
If you're interested, please contact Prof. Joemon Jose at joemon.jose@glasgow.ac.uk.