# MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
> [!NOTE]
> 🔥 If you are interested in IQ testing LLMs, check out our new work: AlgoPuzzleVQA

:fire: We have released the visual features extracted using ResNet: https://github.com/declare-lab/MM-Align

:fire: :fire: :fire: For updated baselines, please visit this link: conv-emotion

:fire: :fire: :fire: To download the data, use wget:

```bash
wget http://web.eecs.umich.edu/~mihalcea/downloads/MELD.Raw.tar.gz
```
## Leaderboard

## Updates

- 10/10/2020: New paper and SOTA in Emotion Recognition in Conversations on the MELD dataset. Refer to the COSMIC directory for the code. Read the paper: COSMIC: COmmonSense knowledge for eMotion Identification in Conversations.
- 22/05/2019: MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation has been accepted as a full paper at ACL 2019. The updated paper can be found here: https://arxiv.org/pdf/1810.02508.pdf
- 22/05/2019: Dyadic MELD has been released. It can be used to test dyadic conversational models.
- 15/11/2018: The problem in train.tar.gz has been fixed.
## Research Works using MELD

- Zhang, Yazhou, Qiuchi Li, Dawei Song, Peng Zhang, and Panpan Wang. "Quantum-Inspired Interactive Networks for Conversational Sentiment Analysis." IJCAI 2019.
- Zhang, Dong, Liangqing Wu, Changlong Sun, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. "Modeling both Context- and Speaker-Sensitive Dependence for Emotion Detection in Multi-speaker Conversations." IJCAI 2019.
- Ghosal, Deepanway, Navonil Majumder, Soujanya Poria, Niyati Chhaya, and Alexander Gelbukh. "DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation." EMNLP 2019.
## Introduction

The Multimodal EmotionLines Dataset (MELD) has been created by enhancing and extending the EmotionLines dataset. MELD contains the same dialogue instances available in EmotionLines, but it also encompasses the audio and visual modalities along with the text. MELD has more than 1,400 dialogues and 13,000 utterances from the Friends TV series, with multiple speakers participating in each dialogue. Each utterance in a dialogue has been labeled with one of seven emotions: anger, disgust, sadness, joy, neutral, surprise, or fear. MELD also has a sentiment annotation (positive, negative, or neutral) for each utterance.
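To make the annotation scheme concrete, here is a minimal sketch of one labeled utterance as a Python record. The field names and the example dialogue are illustrative only; the actual CSV columns are documented under "Description of the .csv files" below.

```python
from dataclasses import dataclass

# The seven emotion labels and three sentiment labels used in MELD.
EMOTIONS = {"anger", "disgust", "sadness", "joy", "neutral", "surprise", "fear"}
SENTIMENTS = {"positive", "negative", "neutral"}

@dataclass
class Utterance:
    """One labeled turn inside a multi-party dialogue (illustrative shape)."""
    dialogue_id: int
    utterance_id: int  # position of the turn within its dialogue
    speaker: str
    text: str
    emotion: str       # one of EMOTIONS
    sentiment: str     # one of SENTIMENTS

    def __post_init__(self) -> None:
        assert self.emotion in EMOTIONS, f"unknown emotion: {self.emotion}"
        assert self.sentiment in SENTIMENTS, f"unknown sentiment: {self.sentiment}"

# A dialogue is an ordered list of such utterances by multiple speakers.
dialogue = [
    Utterance(0, 0, "Speaker A", "Guess what!", "surprise", "positive"),
    Utterance(0, 1, "Speaker B", "You got the job?", "joy", "positive"),
]
```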
## Example Dialogue

## Dataset Statistics

| Statistics                      | Train   | Dev     | Test    |
|---------------------------------|---------|---------|---------|
| # of modality                   | {a,v,t} | {a,v,t} | {a,v,t} |
| # of unique words               | 10,643  | 2,384   | 4,361   |
| Avg. utterance length           | 8.03    | 7.99    | 8.28    |
| Max. utterance length           | 69      | 37      | 45      |
| Avg. # of emotions per dialogue | 3.30    | 3.35    | 3.24    |
| # of dialogues                  | 1039    | 114     | 280     |
| # of utterances                 | 9989    | 1109    | 2610    |
| # of speakers                   | 260     | 47      | 100     |
| # of emotion shift              | 4003    | 427     | 1003    |
| Avg. duration of an utterance   | 3.59s   | 3.59s   | 3.58s   |
Please visit https://affective-meld.github.io for more details.
## Dataset Distribution

| Emotion  | Train | Dev | Test |
|----------|-------|-----|------|
| Anger    | 1109  | 153 | 345  |
| Disgust  | 271   | 22  | 68   |
| Fear     | 268   | 40  | 50   |
| Joy      | 1743  | 163 | 402  |
| Neutral  | 4710  | 470 | 1256 |
| Sadness  | 683   | 111 | 208  |
| Surprise | 1205  | 150 | 281  |
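These per-class counts can be recomputed from the annotation CSVs shipped in this repository. A minimal pandas sketch, assuming the split files data/MELD/train_sent_emo.csv, dev_sent_emo.csv, and test_sent_emo.csv with an Emotion column (see the column specification below):

```python
import pandas as pd

# Assumed annotation file names from this repository's data/MELD directory.
SPLITS = {
    "Train": "data/MELD/train_sent_emo.csv",
    "Dev": "data/MELD/dev_sent_emo.csv",
    "Test": "data/MELD/test_sent_emo.csv",
}

# Count utterances per emotion label in each split.
counts = {
    split: pd.read_csv(path)["Emotion"].value_counts()
    for split, path in SPLITS.items()
}
print(pd.DataFrame(counts).fillna(0).astype(int))
```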
## Purpose

Multimodal data analysis exploits information from multiple parallel data channels for decision making. With the rapid growth of AI, multimodal emotion recognition has gained major research interest, primarily due to its potential applications in challenging tasks such as dialogue generation and multimodal interaction. A conversational emotion recognition system can be used to generate appropriate responses by analysing user emotions. Although numerous works have been carried out on multimodal emotion recognition, only a few focus on understanding emotions in conversations, and those are limited to dyadic conversations and thus do not scale to emotion recognition in multi-party conversations with more than two participants. EmotionLines can be used as a resource for emotion recognition in text only, as it does not include data from other modalities such as visual and audio. At the same time, no other multimodal multi-party conversational dataset is available for emotion recognition research. In this work, we have extended, improved, and further developed the EmotionLines dataset for the multimodal scenario.

Emotion recognition over sequential turns poses several challenges, and context understanding is one of them. The emotion changes and emotion flow across the turns of a dialogue make accurate context modelling difficult. Since this dataset provides multimodal data sources for each dialogue, we hypothesise that they will improve context modelling and thus benefit overall emotion recognition performance. The dataset can also be used to develop a multimodal affective dialogue system.

IEMOCAP and SEMAINE are multimodal conversational datasets that contain an emotion label for each utterance. However, these datasets are dyadic in nature, which justifies the importance of our Multimodal EmotionLines Dataset. The other publicly available multimodal emotion and sentiment recognition datasets are MOSEI, MOSI, and MOUD; however, none of them is conversational.
## Dataset Creation

The first step deals with finding the timestamp of every utterance in each of the dialogues present in the EmotionLines dataset. To accomplish this, we crawled through the subtitle files of all the episodes, which contain the beginning and end timestamps of the utterances. This process enabled us to obtain the season ID, episode ID, and timestamp of each utterance in the episode. We imposed two constraints when obtaining the timestamps: (a) the timestamps of the utterances in a dialogue must be in increasing order, and (b) all the utterances in a dialogue must belong to the same episode and scene (a simplified check is sketched below). Applying these two conditions revealed that a few dialogues in EmotionLines actually consist of multiple natural dialogues; we filtered out those cases from the dataset. Because of this error-correction step, the number of dialogues in MELD differs from that in EmotionLines. After obtaining the timestamp of each utterance, we extracted the corresponding audio-visual clip from the source episode and, separately, extracted the audio content from each video clip. The final dataset thus contains the visual, audio, and textual modalities for each dialogue.
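A simplified sketch of the two alignment constraints described above; the record fields (season, episode, scene, start) are hypothetical names for the intermediate alignment data, not fields of the released CSVs:

```python
def is_valid_dialogue(utterances):
    """Check the two constraints used when aligning EmotionLines dialogues
    to subtitle timestamps (hypothetical record fields)."""
    # (b) all utterances must come from the same episode and scene
    same_scene = len({(u["season"], u["episode"], u["scene"])
                      for u in utterances}) == 1
    # (a) timestamps must be in increasing order within the dialogue
    increasing = all(
        prev["start"] < curr["start"]
        for prev, curr in zip(utterances, utterances[1:])
    )
    return same_scene and increasing

# Dialogues violating either constraint were filtered out of the dataset.
```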
## Paper

The paper describing this dataset can be found here: https://arxiv.org/pdf/1810.02508.pdf
## Download the data

Please visit http://web.eecs.umich.edu/~mihalcea/downloads/MELD.Raw.tar.gz to download the raw data. The data are stored in .mp4 format and can be found in the XXX.tar.gz files. The annotations can be found at https://github.com/declare-lab/MELD/tree/master/data/MELD.
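For a scripted download, here is a minimal Python equivalent of the wget command above. Note that the archive is large, and the inner split archives (e.g. train.tar.gz) need a second extraction step of the same form:

```python
import tarfile
import urllib.request

URL = "http://web.eecs.umich.edu/~mihalcea/downloads/MELD.Raw.tar.gz"

# Fetch the raw archive (large download) into the current directory.
urllib.request.urlretrieve(URL, "MELD.Raw.tar.gz")

# Extract the top-level archive; the .mp4 clips live inside the
# per-split tarballs it contains, which are extracted the same way.
with tarfile.open("MELD.Raw.tar.gz", "r:gz") as tar:
    tar.extractall("MELD.Raw")
```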
## Description of the .csv files

### Column Specification

| Column Name  | Description                                                                                       |
|--------------|---------------------------------------------------------------------------------------------------|
| Sr No.       | Serial number of the utterance, mainly for referencing utterances across different versions or copies with different subsets. |
| Utterance    | The individual utterance from EmotionLines, as a string.                                          |
| Speaker      | Name of the speaker associated with the utterance.                                                |
| Emotion      | The emotion (anger, disgust, sadness, joy, neutral, surprise, or fear) expressed by the speaker in the utterance. |
| Sentiment    | The sentiment (positive, negative, or neutral) expressed by the speaker in the utterance.         |
| Dialogue_ID  | The index of the dialogue, starting from 0.                                                       |
| Utterance_ID | The index of the utterance within its dialogue, starting from 0.                                  |
| Season       | The season number of Friends to which the utterance belongs.                                      |
| Episode      | The episode number within the season to which the utterance belongs.                              |
| StartTime    | The start time of the utterance in the given episode, in hh:mm:ss,ms format.                      |
| EndTime      | The end time of the utterance in the given episode, in hh:mm:ss,ms format.                        |
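As a usage sketch, the columns above are enough to reassemble the utterances of a split into ordered dialogues; this assumes the annotation file data/MELD/train_sent_emo.csv:

```python
import pandas as pd

df = pd.read_csv("data/MELD/train_sent_emo.csv")

# Rebuild each dialogue as an ordered sequence of labeled turns.
for dialogue_id, turns in df.groupby("Dialogue_ID"):
    for _, row in turns.sort_values("Utterance_ID").iterrows():
        print(f"{row['Speaker']}: {row['Utterance']} "
              f"[{row['Emotion']}/{row['Sentiment']}]")
    break  # show only the first dialogue
```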
