TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation (NAACL 2024)
Key Libraries
- Python 3.9
- Install the dependencies listed in requirements.txt:

pip install -r requirements.txt
Datasets

Each dataset is split into train/dev/test in the dataset folder. (However, we do not provide the raw video clips here.)
Training

For MELD:
python MELD/teacher.py
python MELD/student.py
python MELD/fusion.py
For IEMOCAP:
python IEMOCAP/teacher.py
python IEMOCAP/student.py
python IEMOCAP/fusion.py
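The three scripts form a pipeline: teacher.py trains the text teacher, student.py distills it into the audio and visual students, and fusion.py trains the fusion module on top of all three encoders. For readers new to distillation, here is a minimal sketch of a standard logit-matching objective; the temperature, mixing weight, and variable names are illustrative assumptions, and student.py implements the paper's own loss terms.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Hinton-style knowledge distillation: soften the teacher's logits
    and train the student to match them, mixed with ordinary
    cross-entropy on the gold labels. Hyperparameters are illustrative."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # T^2 rescaling keeps the KD gradient magnitude comparable to CE.
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Dummy batch: 8 utterances, 7 emotion classes (MELD's label set size).
teacher_out = torch.randn(8, 7)
student_out = torch.randn(8, 7)
gold = torch.randint(0, 7, (8,))
print(distillation_loss(student_out, teacher_out.detach(), gold))
```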
Pretrained Models
- Google Drive
- Unpack model.tar.gz (tar -xzf model.tar.gz) and place each save_model folder inside the MELD and IEMOCAP directories:
|- MELD/
| |- save_model/
|- ...
|- IEMOCAP/
| |- save_model/
|- ...
Inference

Run inference.py to reproduce the reported results:
python MELD/inference.py
python IEMOCAP/inference.py
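TelME reports weighted F1 on both MELD and IEMOCAP, so a quick sanity check of a run only needs the gold and predicted label lists. A minimal sketch, assuming flat Python lists (the variable names are illustrative, not the scripts' actual outputs):

```python
from sklearn.metrics import f1_score

def weighted_f1(gold_labels, pred_labels):
    """Weighted F1, the metric reported for both datasets."""
    return f1_score(gold_labels, pred_labels, average="weighted")

# Toy example; replace with labels collected from inference.py.
print(weighted_f1([0, 1, 2, 1, 0], [0, 1, 1, 1, 0]))
```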
Citation

@article{yun2024telme,
title={TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation},
author={Yun, Taeyang and Lim, Hyunkuk and Lee, Jeonghwan and Song, Min},
journal={arXiv preprint arXiv:2401.12987},
year={2024}
}