# CIDer

Code for "Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts".
## Usage

### Clone the repository

```bash
git clone https://github.com/gw-zhong/CIDer.git
```
### Download the datasets

- IID: CMU-MOSI & CMU-MOSEI (BERT) [aligned & unaligned]
- OOD: CMU-MOSI & CMU-MOSEI (BERT) [aligned & unaligned]
- Cross-dataset: CMU-MOSI & CMU-MOSEI (BERT) [aligned]

The datasets are available from BaiduYun Disk (code: 19db) or Hugging Face.
### Download the BERT models

- BaiduYun Disk (code: e7mw)
### Preparation

Create an (empty) folder for the results:

```bash
cd cider
mkdir results
```

Then set `data_path` and `model_path` correctly in `main.py`, `main_eval.py`, and `main_run.py`.
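A quick sanity check before launching a run catches path typos early. The variable names below mirror the `data_path`/`model_path` settings mentioned above, but the example values are placeholders you should replace with your own directories:

```python
from pathlib import Path

# Placeholder values (replace with your local directories), matching the
# data_path / model_path settings in main.py, main_eval.py, and main_run.py.
data_path = Path("/path/to/datasets")
model_path = Path("/path/to/bert")

# Fail fast on misconfigured paths instead of partway into a training run.
for name, p in [("data_path", data_path), ("model_path", model_path)]:
    if not p.exists():
        print(f"warning: {name} = {p} does not exist")
```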
### Hyperparameter tuning

```bash
python main.py --[FLAGS]
```

Or use the bash script for tuning:

```bash
bash scripts/run_all.sh
```

Please note that `run_all.sh` contains all the tasks and uses 8 GPUs for hyperparameter tuning. Select only the task(s) you actually need instead of running all of them.
### Evaluation

```bash
python main_eval.py --[FLAGS]
```

When running the evaluation, you need to set `missing_mode` in `main_eval.py` correctly. The specific settings are as follows:

- Our proposed RMFM: `--missing_mode RMFM`
- Traditional RMFM: `--missing_mode RMFM_same`
- RMM: `--missing_mode RMM`
- TMFM: `--missing_mode TMFM`
- STMFM: `--missing_mode STMFM`
- SMM: `--missing_mode RMFM_same`, and uncomment the sections in `main_eval.py` from line 169 to line 175, plus line 188
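For intuition, a feature-missing protocol of this kind can be sketched in a few lines of NumPy. This is a toy illustration of zeroing out random time steps in one modality's feature sequence, not the repository's implementation; the `missing_rate` value and the masking scheme are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
seq = rng.normal(size=(50, 32))   # one modality: (time_steps, feature_dim)
missing_rate = 0.3                # fraction of time steps to drop (assumed)

# Zero out a random subset of time steps to simulate missing features.
mask = rng.random(50) < missing_rate
seq_missing = seq.copy()
seq_missing[mask] = 0.0
```

Unmasked time steps are left untouched, so the model sees a mixture of intact and missing frames within the same sequence.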
### Single training

```bash
python main_run.py --[FLAGS]
```
## Reproduction

To facilitate reproduction of the results in the paper, we have also uploaded the corresponding model weights:

- BaiduYun Disk (code: 885a)
- Hugging Face

You just need to run `main_eval.py` to reproduce the results. Please note that when evaluating a given model, you should also set the corresponding task parameters in `main_eval.py`.
## Citation

Please cite our paper if you find it useful for your research:
```bibtex
@article{zhong2025towards,
  title={Towards Robust Multimodal Emotion Recognition under Missing Modalities and Distribution Shifts},
  author={Zhong, Guowei and Huan, Ruohong and Wu, Mingzhen and Liang, Ronghua and Chen, Peng},
  journal={arXiv preprint arXiv:2506.10452},
  year={2025}
}
```
## Contact

If you have any questions, feel free to contact me at guoweizhong@zjut.edu.cn or gwzhong@zju.edu.cn.
## Acknowledgment

Our code is based on MulT and SELF-MM, and our repartitioned MER OOD datasets are based on CLUE. Thanks to their open-source spirit for saving us a lot of time.
