
TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation (NAACL 2024)

Figure 3: The overall flow of our model.

Requirements

Key Libraries

  1. Python 3.9
  2. The packages listed in requirements.txt
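
Dependencies can be installed with pip:

pip install -r requirements.txt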

Datasets

Each dataset is split into train/dev/test in the dataset folder. (Note that we do not provide the video clips here.)

  1. MELD
  2. IEMOCAP

Train

Training runs in three stages, in order: the text teacher, the audio and visual students distilled from it, and the fusion network (a distillation sketch follows the IEMOCAP commands).

For MELD:

python MELD/teacher.py
python MELD/student.py
python MELD/fusion.py

For IEMOCAP:

python IEMOCAP/teacher.py
python IEMOCAP/student.py
python IEMOCAP/fusion.py
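
A rough sketch of the teacher-leading distillation idea behind student.py, for intuition only: the text teacher's softened predictions guide the audio/visual students. This is an assumption for illustration, not necessarily the exact objective in the code, and every name below is hypothetical.

# Illustrative response-level knowledge distillation; not the repository's
# actual code. The teacher's softened predictions supervise the student
# alongside the gold emotion labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: pull the student's distribution toward the teacher's,
    # with temperature T smoothing both sides (scaled back by T*T).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: a batch of 8 utterances over the 7 MELD emotion classes.
student_logits = torch.randn(8, 7, requires_grad=True)
teacher_logits = torch.randn(8, 7)
labels = torch.randint(0, 7, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()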

Testing with pretrained TelME

Pretrained checkpoints are expected under each dataset's save_model/ directory:

|- MELD/
|   |- save_model/
|   |   |- ...
|- IEMOCAP/
|   |- save_model/
|   |   |- ...

Running inference.py with these checkpoints reproduces the results:

python MELD/inference.py
python IEMOCAP/inference.py
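
For orientation, loading a saved PyTorch checkpoint usually follows the pattern below. The model class, dimensions, and checkpoint filename here are hypothetical stand-ins; consult inference.py for the actual names.

import torch
import torch.nn as nn

# Hypothetical stand-in for the trained fusion network; the real class and
# checkpoint filenames live in the project code and the save_model/ folders.
class FusionModel(nn.Module):
    def __init__(self, dim=768, num_classes=7):
        super().__init__()
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, features):
        return self.classifier(features)

model = FusionModel()
state = torch.load("MELD/save_model/fusion.pt", map_location="cpu")
model.load_state_dict(state)  # assumes the file holds a plain state_dict
model.eval()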

Citation

@article{yun2024telme,
  title={TelME: Teacher-leading Multimodal Fusion Network for Emotion Recognition in Conversation},
  author={Yun, Taeyang and Lim, Hyunkuk and Lee, Jeonghwan and Song, Min},
  journal={arXiv preprint arXiv:2401.12987},
  year={2024}
}
