CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality

PyTorch implementation of the code for the paper *CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality* (ACL 2020).
Please cite our paper if you find our work useful for your research:

```
@inproceedings{yu2020ch,
  title={CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality},
  author={Yu, Wenmeng and Xu, Hua and Meng, Fanyang and Zhu, Yilin and Ma, Yixiao and Wu, Jiele and Zou, Jiyun and Yang, Kaicheng},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  pages={3718--3727},
  year={2020}
}
```
- You can download the CH-SIMS dataset from either of the following links (md5: `6a92dccd83373b48ac83257bddab2538`):
  - Baidu Yun Disk (code: `ozo2`)
  - Google Drive
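After downloading, you can check the archive against the md5 checksum above. A minimal sketch; the archive file name used in the commented example is hypothetical:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the md5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (the archive name is hypothetical):
# assert md5sum("CH-SIMS.zip") == "6a92dccd83373b48ac83257bddab2538"
```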
In this framework, we support the following methods:

| Type | Model Name | From |
| --- | --- | --- |
| Single-Task | EF_LSTM | MultimodalDNN |
| Single-Task | LF_DNN | - |
| Single-Task | TFN | TensorFusionNetwork |
| Single-Task | LMF | Low-rank-Multimodal-Fusion |
| Single-Task | MFN | Memory-Fusion-Network |
| Single-Task | MulT (without CTC) | Multimodal-Transformer |
| Multi-Task | MLF_DNN | - |
| Multi-Task | MTFN | - |
| Multi-Task | MLMF | - |
- Clone this repo and install the requirements:

```
git clone https://github.com/thuiar/MMSA
cd MMSA
pip install -r requirements.txt
```
If you want to extract features from the raw videos, follow the steps below; otherwise, you can directly use the feature files we provide.
- Fetch audio and aligned faces (see `data/DataPre.py`):
  1. Install the ffmpeg toolkit:
     ```
     sudo apt update
     sudo apt install ffmpeg
     ```
  2. Run `data/DataPre.py`:
     ```
     python data/DataPre.py --data_dir [path_to_CH-SIMS]
     ```
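The audio-fetching step above typically shells out to ffmpeg. A minimal sketch of such a call; the paths and sample rate here are illustrative, not necessarily the exact settings `data/DataPre.py` uses:

```python
import subprocess

def build_audio_cmd(video_path, wav_path, sample_rate=16000):
    """ffmpeg arguments that extract a mono WAV track from a video."""
    return [
        "ffmpeg", "-i", video_path,
        "-vn",                    # drop the video stream
        "-ac", "1",               # mono
        "-ar", str(sample_rate),  # resample (16 kHz by default)
        "-y", wav_path,           # overwrite the output if it exists
    ]

# To actually run it (requires ffmpeg on PATH):
# subprocess.run(build_audio_cmd("clip.mp4", "clip.wav"), check=True)
```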
- Get features (see `data/getFeature.py`):
  1. Download Bert-Base, Chinese from Google-Bert.
  2. Convert the TensorFlow checkpoint into a PyTorch one using transformers-cli.
  3. Install the OpenFace toolkit.
  4. Run `data/getFeature.py`:
     ```
     python data/getFeature.py --data_dir [path_to_CH-SIMS] --openface2Path [path_to_FeatureExtraction] --pretrainedBertPath [path_to_pretrained_bert_directory]
     ```
- Then you can find the preprocessed features in `path/to/CH-SIMS/Processed/features/data.npz`.
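The `.npz` archive can be inspected with NumPy. A minimal sketch; the names of the arrays stored inside are not assumed here, so list them with `.files`:

```python
import numpy as np

def inspect_features(npz_path):
    """Print the array names and shapes stored in an .npz feature archive."""
    data = np.load(npz_path, allow_pickle=True)
    for key in data.files:          # array names vary; check them yourself
        print(key, data[key].shape)
    return data

# inspect_features("path/to/CH-SIMS/Processed/features/data.npz")
```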
Run training and evaluation with the following command, where `--modelName` is one of the supported models listed above:

```
python run.py --modelName *** --datasetName sims --tasks MTAV
```