# TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation (NAACL 2024)

*Figure 3: The overall flow of our model.*

## Requirements

### Key Libraries

1. Python 3.9
2. The packages listed in `requirements.txt`
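A minimal setup sketch, assuming a standard `venv`/`pip` workflow (the repository may prescribe a different one):

```shell
# Create and activate a Python 3.9 virtual environment (assumed workflow)
python3.9 -m venv .venv
source .venv/bin/activate

# Install the pinned dependencies
pip install -r requirements.txt
```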

## Datasets

Each dataset is split into train/dev/test under the `dataset` folder. (Video clips are not provided here.)

1. MELD
2. IEMOCAP

## Train

For MELD:

```shell
python MELD/teacher.py
python MELD/student.py
python MELD/fusion.py
```

For IEMOCAP:

```shell
python IEMOCAP/teacher.py
python IEMOCAP/student.py
python IEMOCAP/fusion.py
```
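The teacher → student → fusion order reflects TelME's teacher-leading design: the text teacher is trained first, and the non-verbal students are then distilled from it before the fusion stage. A minimal sketch of a standard temperature-scaled distillation loss, illustrative only (function names, the temperature value, and the NumPy formulation are assumptions, not the repository's actual code):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1)
    return float(kl.mean() * T ** 2)
```

The loss is zero when the student matches the teacher exactly and grows as their softened distributions diverge.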

## Testing with pretrained TelME

Place the pretrained checkpoints as follows:

```
|- MELD/
|   |- save_model/
|   |   |- ...
|- IEMOCAP/
|   |- save_model/
|   |   |- ...
```

Running `inference.py` allows you to reproduce the results.

```shell
python MELD/inference.py
python IEMOCAP/inference.py
```

## Citation

```bibtex
@article{yun2024telme,
  title={TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation},
  author={Yun, Taeyang and Lim, Hyunkuk and Lee, Jeonghwan and Song, Min},
  journal={arXiv preprint arXiv:2401.12987},
  year={2024}
}
```