SMORE: Spectrum-based Modality Representation Fusion Graph Convolutional Network for Multimodal Recommendation
This is the PyTorch implementation for our WSDM 2025 paper:
[WSDM 2025] Rongqing Kenneth Ong, Andy W. H. Khong (2025). Spectrum-based Modality Representation Fusion Graph Convolutional Network for Multimodal Recommendation
- Python 3.7
- PyTorch 1.13
Download from Google Drive: Baby/Sports/Clothing
The data comprise text and image features extracted with Sentence-Transformers and a CNN, respectively.
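These modality features are typically used to build k-NN item-item similarity graphs, whose neighborhood sizes correspond to the `image_knn_k` and `text_knn_k` hyperparameters listed below. The following is a minimal illustrative sketch of such a construction using cosine similarity; the random placeholder features, their dimensions, and the exact normalization are assumptions, not the released data or the paper's exact procedure:

```python
import numpy as np

def knn_adjacency(features: np.ndarray, k: int) -> np.ndarray:
    """Build a binary k-NN item-item adjacency matrix from modality
    features via cosine similarity (a common construction; the paper's
    exact graph normalization may differ)."""
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-loops
    adj = np.zeros_like(sim)
    topk = np.argsort(-sim, axis=1)[:, :k]  # indices of k most similar items
    rows = np.arange(sim.shape[0])[:, None]
    adj[rows, topk] = 1.0
    return adj

# Placeholder features: 100 items; 384-dim text and 4096-dim image
# embeddings are assumed dimensions for illustration only.
rng = np.random.default_rng(0)
text_feat = rng.standard_normal((100, 384))
image_feat = rng.standard_normal((100, 4096))

text_adj = knn_adjacency(text_feat, k=15)    # text_knn_k for Baby
image_adj = knn_adjacency(image_feat, k=40)  # image_knn_k for Baby
```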
- Place the downloaded data (e.g. `baby`) into the `data` directory.
- Enter the `src` folder and execute the following command:
python main.py -m SMORE -d baby
Other parameters can be set either through the command line or via the configuration files located in `configs/model/SMORE.yaml` and `configs/dataset/*.yaml`.
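For example, the Baby hyperparameters from the table below could be expressed in `configs/model/SMORE.yaml` along these lines (a hypothetical sketch: the key names mirror the table, but the actual file may use a different layout):

```yaml
# Hypothetical sketch of configs/model/SMORE.yaml (Baby settings);
# key names follow the hyperparameter table, actual keys may differ.
n_ui_layers: 4
n_layers: 1
image_knn_k: 40
text_knn_k: 15
cl_loss: 0.01
reg_weight: 1e-04
dropout_rate: 0.1
```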
We present the optimal hyperparameters for SMORE to replicate the results shown in Table 2 of our paper:
Datasets | n_ui_layers | n_layers | image_knn_k | text_knn_k | cl_loss | reg_weight | dropout_rate |
---|---|---|---|---|---|---|---|
Baby | 4 | 1 | 40 | 15 | 0.01 | 1e-04 | 0.1 |
Sports | 3 | 1 | 10 | 10 | 0.03 | 1e-04 | 0 |
Clothing | 3 | 1 | 40 | 10 | 0.01 | 1e-05 | 0 |
If you find SMORE useful in your research, please consider citing our paper.
@article{ong2024spectrum,
title={Spectrum-based Modality Representation Fusion Graph Convolutional Network for Multimodal Recommendation},
author={Ong, Rongqing Kenneth and Khong, Andy WH},
journal={arXiv preprint arXiv:2412.14978},
year={2024}
}
This code is made available solely for academic research purposes.
The structure of this code is inspired by the MMRec framework. We acknowledge and appreciate their valuable contributions.