# Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals :performing_arts:
Official PyTorch repository for the papers:
- Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals, ICASSPW 2023. [IEEEXplore][ArXiv Preprint]
- Hierarchical Hypercomplex Network for Multimodal Emotion Recognition, MLSP 2024. [IEEEXplore][ArXiv Preprint]
- PHemoNet: A Multimodal Network for Physiological Signals, RTSI 2024. [IEEEXplore][ArXiv Preprint]
Authors:
Eleonora Lopez, Eleonora Chiarantano, Eleonora Grassucci, Aurelio Uncini, and Danilo Comminiello from ISPAMM Lab 🏘️
## 📰 News
- [2025.05.15] Released pretrained weights 💣
- [2025.05.14] Updated code with H2 and PHemoNet models from MLSP and RTSI papers! 👩🏻💻
- [2024.07] Extension papers have been accepted at MLSP and RTSI 2024!
- [2023.11.11] Code is available for HyperFuseNet! 👩🏼💻
- [2023.04.14] The paper has been accepted for presentation at ICASSP workshop 2023 🎉!
## Overview :blush:

### 📚 Papers & Models
| Model | Paper | Arousal F1 | Arousal Acc | Valence F1 | Valence Acc | Highlights | Weights |
|-------|-------|------------|-------------|------------|-------------|------------|---------|
| 🥇 H2 | MLSP 2024 [IEEEXplore][ArXiv] | 0.557 | 56.91 | 0.685 | 67.87 | Hierarchical model with PHC-based encoders in modality-specific domains, achieves best performance | Arousal - Valence |
| 🥈 PHemoNet | RTSI 2024 [IEEEXplore][ArXiv] | 0.401 | 42.54 | 0.505 | 50.77 | PHM-based encoders with modality-specific domains and a revised hypercomplex fusion module | Arousal - Valence |
| 🥉 HyperFuseNet | ICASSPW 2023 [IEEEXplore][ArXiv] | 0.397 | 41.56 | 0.436 | 44.30 | Introduces the hypercomplex fusion module | Arousal - Valence |
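The PHM/PHC layers these models build on parameterize a weight matrix as a sum of Kronecker products, cutting parameters by roughly a factor of `n`. The NumPy sketch below illustrates only the idea, not the repository's implementation; `phm_weight` and all shapes are hypothetical.

```python
import numpy as np

def phm_weight(A, F):
    # W = sum_i kron(A_i, F_i): the PHM parameterization of a weight matrix.
    # A holds n small (n x n) "algebra" matrices, F holds n (d_out/n x d_in/n) blocks.
    return sum(np.kron(Ai, Fi) for Ai, Fi in zip(A, F))

rng = np.random.default_rng(0)
n = 4                                  # hypercomplex dimension (quaternion-like)
d_in, d_out = 8, 12                    # both must be divisible by n
A = rng.standard_normal((n, n, n))
F = rng.standard_normal((n, d_out // n, d_in // n))

W = phm_weight(A, F)                   # full (d_out x d_in) weight, built from ~1/n params
x = rng.standard_normal(d_in)
y = W @ x                              # acts like an ordinary linear layer
print(W.shape)                         # (12, 8)
```

For `n = 4` this recovers a quaternion-like layer; the papers extend the same construction to modality-specific domains and fusion modules.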
## How to use :scream:

### Install requirements

```
pip install -r requirements.txt
```
### Data preprocessing

1. Download the data from the official website.
2. Preprocess the data:
   ```
   python data/preprocessing.py
   ```
   This will create a folder for each subject with CSV files containing the preprocessed data and save everything inside `args.save_path`.
3. Create torch files with augmented and split data:
   ```
   python data/create_dataset.py
   ```
   This performs data splitting and augmentation from the preprocessed data in step 2. You can specify which label to consider by setting the parameter `label_kind` to either `Arsl` or `Vlnc`. The data is saved as `.pt` files which are used for training.
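Putting the steps above together, an end-to-end preprocessing run might look like the following. The flag names are illustrative assumptions; check each script's actual argparse options before running.

```shell
# Hypothetical invocation; adjust paths and flags to the scripts' real options.
pip install -r requirements.txt
python data/preprocessing.py --save_path ./preprocessed
python data/create_dataset.py --label_kind Arsl    # or Vlnc for valence
```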
### Training

To reproduce the results, use the corresponding configuration file for each model and task:

- `configs/h2.yml` → H2 model
- `configs/phemonet.yml` → PHemoNet
- `configs/hyperfusenet_arousal.yml` → HyperFuseNet for arousal
- `configs/hyperfusenet_valence.yml` → HyperFuseNet for valence
Run training with:

```
python main.py --train_file_path /path/to/arsl_or_vlnc_train.pt --test_file_path /path/to/arsl_or_vlnc_test.pt --config configs/config.yml
```
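As a concrete illustration, training H2 on both tasks could look like this, assuming the `.pt` files from `data/create_dataset.py` were written to `./data_pt` (a hypothetical path and hypothetical file names):

```shell
# Arousal task with the H2 config (paths are placeholders).
python main.py --train_file_path ./data_pt/arsl_train.pt \
               --test_file_path ./data_pt/arsl_test.pt \
               --config configs/h2.yml
# Valence task with the same model.
python main.py --train_file_path ./data_pt/vlnc_train.pt \
               --test_file_path ./data_pt/vlnc_test.pt \
               --config configs/h2.yml
```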
To run a hyperparameter sweep (used in the HyperFuseNet paper):

```
python sweep.py
```

Experiments are tracked directly on Weights & Biases.
## Cite
Please cite our works if you found this repo useful 🫶
- H2 model:

```bibtex
@inproceedings{lopez2024hierarchical,
  title={Hierarchical hypercomplex network for multimodal emotion recognition},
  author={Lopez, Eleonora and Uncini, Aurelio and Comminiello, Danilo},
  booktitle={2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP)},
  pages={1--6},
  year={2024},
  organization={IEEE}
}
```

- PHemoNet:

```bibtex
@inproceedings{lopez2024phemonet,
  title={PHemoNet: A Multimodal Network for Physiological Signals},
  author={Lopez, Eleonora and Uncini, Aurelio and Comminiello, Danilo},
  booktitle={2024 IEEE 8th Forum on Research and Technologies for Society and Industry Innovation (RTSI)},
  pages={260--264},
  year={2024},
  organization={IEEE}
}
```

- HyperFuseNet:

```bibtex
@inproceedings{lopez2023hypercomplex,
  title={Hypercomplex Multimodal Emotion Recognition from EEG and Peripheral Physiological Signals},
  author={Lopez, Eleonora and Chiarantano, Eleonora and Grassucci, Eleonora and Comminiello, Danilo},
  booktitle={2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}
```
## Want more hypercomplex models? :busts_in_silhouette:
Check out:
- Multi-view hypercomplex learning for breast cancer screening, under review at TMI, 2022 [Paper][GitHub]
- PHNNs: Lightweight neural networks via parameterized hypercomplex convolutions, IEEE Transactions on Neural Networks and Learning Systems, 2022 [Paper][GitHub]
- Hypercomplex Image-to-Image Translation, IJCNN, 2022 [Paper][GitHub]