
<a name="readme-top"></a>


# Affective Computing Knowledge Exchange

<!-- write a short introduction -->

This repository is a collection of datasets, models, and approaches for affective computing. The goal is to provide a comprehensive overview of the current state of the art in multimodal affective computing, with a focus on emotion extraction from different modalities. The repository is structured as follows:

<!-- TABLE OF CONTENTS -->
<details>
  <summary>Table of Contents</summary>
  <ol>
    <li><a href="#datasets">Datasets</a></li>
    <li>
      <a href="#models-and-approaches">Models and Approaches</a>
      <ul>
        <li>
          <a href="#visual-features">Visual Features</a>
          <ul>
            <li><a href="#keypoint-extractor">Keypoint extractor</a></li>
            <li><a href="#interpretation-of-visual-features">Interpretation of visual features</a></li>
          </ul>
        </li>
        <li>
          <a href="#audio-features">Audio Features</a>
          <ul>
            <li><a href="#feature-extractor">Feature extractor</a></li>
            <li><a href="#interpretation-of-audio-features">Interpretation of audio features</a></li>
          </ul>
        </li>
        <li><a href="#multimodal-features">Multimodal Features</a></li>
      </ul>
    </li>
    <li><a href="#evaluation">Evaluation</a></li>
    <li><a href="#contact">Contact</a></li>
    <li><a href="#acknowledgements">Acknowledgements</a></li>
  </ol>
</details>

## Datasets

<!--
Usage of the table
Name: name of the dataset
Year: year of publication
Description: a short description of the dataset
Tags: tags of the dataset, e.g. "audio": :sound:, "video": :movie_camera:, "image": :camera:
Link: link to the dataset
Licence: in what context the dataset is allowed to be used, e.g. "research only"
-->

| Name | Description | Number of Subjects | Number of Images/Videos | Facial Expressions | Modalities | Licence |
|:-|:-|:-:|:-:|:-|:-|:-:|
| Aff-Wild2 | Aff-Wild2 is a publicly available in-the-wild audiovisual dataset for affect recognition and analysis, containing videos of people displaying a range of emotions and facial expressions, with per-frame annotations and separate training and test partitions. | 458 | ≈2,800,000 manually annotated frames | neutral, happiness, sadness, surprise, fear, disgust, anger + valence-arousal + action units 1, 2, 4, 6, 12, 15, 20, 25 | :camera: :movie_camera: :sound: | non-commercial |
| JAFFE | The Japanese Female Facial Expression dataset contains 10 Japanese female expressers, each posing 7 basic emotions (anger, disgust, fear, happiness, sadness, surprise, and neutral) three times, for a total of 213 images. | 10 | 213 static images | neutral, sadness, surprise, happiness, fear, anger, disgust | :camera: | non-commercial |
| RAVDESS | Actors speak or sing two different sentences in a neutral, calm, happy, sad, angry, fearful, disgusted, or surprised tone of voice. | 24 | 2452 videos | neutral, calm, happy, sad, angry, fearful, disgusted, surprised | :movie_camera: :sound: | non-commercial |

<p align="right">[<a href="#readme-top">back to top</a>]</p>

### Mental Health Datasets


| Name | Year | Description | Number of Subjects | Mental Disorder | Labels | Modalities | Availability |
|:-|:-:|:-|:-:|:-:|:-|:-|:-:|
| Depresjon | 2018 | A motor activity database of depression episodes in unipolar and bipolar patients. | 55 | depression, unipolar/bipolar | MADRS | motor activity recordings | free to download |
| AVEC2014 | 2014 | The Audio/Visual Emotion Challenge and Workshop. | xx | depression | PHQ | | upon request |
| AVEC2013 | 2013 | The Audio/Visual Emotion Challenge and Workshop. | 292 | depression | BDI, valence, arousal | audio features | free to download |

## Models and Approaches

<p align="right">[<a href="#readme-top">back to top</a>]</p>

### Visual Features

<!-- List of models/approaches that focus on visual input only --> <p align="right">[<a href="#readme-top">back to top</a>]</p>

#### Keypoint extractor

| Name[Link] | Description | Tags |
|:-|:-|:-|
| MediaPipe | MediaPipe is a framework for building multimodal applied machine learning pipelines. It provides a unified platform for the components that make up an ML pipeline, including Face Mesh, Iris Tracking, Hand Tracking, Holistic, and Pose Tracking. | |
| OpenPose | OpenPose is a real-time multi-person keypoint detection library for body, face, hand, and foot estimation. | |
| OpenFace | Facial behavior analysis toolkit that performs facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation. | |
| 3DDFA_V2 | 3DDFA_V2 is a PyTorch implementation of 3DDFA, a 3D Morphable Model (3DMM) based face alignment and reconstruction framework. | |
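The extractors above all return sets of 2D/3D landmark coordinates; downstream features are typically simple geometry computed on those points. As a minimal sketch, here is the commonly used eye-aspect-ratio (EAR), which takes six eye-contour landmarks and drops toward zero when the eye closes. Which landmark indices correspond to the six points depends on the extractor you use (MediaPipe, OpenFace, ...), so the sample coordinates below are illustrative only.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).

    p1/p4 are the horizontal eye corners; p2, p3 (top) and p6, p5 (bottom)
    are the vertical contour points. A closing eye shrinks the numerator.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative coordinates: an open eye vs. a nearly closed one.
open_ear = eye_aspect_ratio((0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1))
closed_ear = eye_aspect_ratio((0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1))
```

Per-frame EAR sequences like this are a cheap input signal for blink detection or drowsiness/affect cues, regardless of which keypoint extractor produced the landmarks.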

<p align="right">[<a href="#readme-top">back to top</a>]</p>

#### Interpretation of visual features

| Name[Link] | Description | Targets |
|:-|:-|:-|
| Facial Action Coding System | FACS encodes movements of individual facial muscles from slight, instantaneous changes in facial appearance. | AU |
| Multi-Task Learning Framework for Emotion Recognition in-the-wild | Features from three different feature extractors are fused and passed into a temporal model. The output is forwarded to two regression models (one for valence, one for arousal) and two classification models (one for emotion, one for action units). | VA, EmoRec, AU |
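To make the FACS → emotion step concrete, here is a toy rule-based mapping from detected action units to a basic-emotion label. The AU prototype sets are a simplified sketch loosely following the well-known EMFACS-style prototypes (e.g. AU6 + AU12 for happiness); real systems learn this mapping from data rather than hand-coding it.

```python
# Hypothetical, simplified AU prototypes per emotion (not an authoritative list).
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},
    "anger": {4, 5, 7, 23},
}

def classify_emotion(active_aus):
    """Return the emotion whose AU prototype best overlaps the active AUs.

    Falls back to "neutral" when no prototype AU is active.
    """
    best, best_score = "neutral", 0.0
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        score = len(active_aus & prototype) / len(prototype)
        if score > best_score:
            best, best_score = emotion, score
    return best
```

A learned interpreter (like the multi-task framework in the table) replaces this overlap heuristic with regression/classification heads, but the input-output contract is the same: AU activations in, affect labels out.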

<p align="right">[<a href="#readme-top">back to top</a>]</p>

### Audio Features

<!-- List of models/approaches that focus on audio input only --> <p align="right">[<a href="#readme-top">back to top</a>]</p>

#### Feature extractor

| Name[Link] | Description | Tags |
|:-|:-|:-|

<p align="right">[<a href="#readme-top">back to top</a>]</p>

#### Interpretation of audio features

| Name[Link] | Description | Tags |
|:-|:-|:-|

<p align="right">[<a href="#readme-top">back to top</a>]</p>

### Multimodal Features

<!-- List of models/approaches that focus on multimodal input -->

| Name[Link] | Description | Tags |
|:-|:-|:-|

<p align="right">[<a href="#readme-top">back to top</a>]</p>

## Evaluation

<!-- List of evaluation metrics for affective computing models and approaches -->

This section describes how to quantify the performance of affective computing models and approaches.

| Name[Link] | Description | Metric | Source |
|:-|:-|:-|:-|
| Valence-Arousal (VA) Space | The valence-arousal space is a two-dimensional space that represents the affective state of a person. The valence axis represents the positive-negative dimension, while the arousal axis represents the active (high)-passive (low) dimension. | Concordance Correlation Coefficient | ibug |
| Expression Classification | A classification task that assigns the facial expression to one of the following categories: neutral, happy, sad, surprise, fear, disgust, anger, contempt. | Accuracy, Precision, Recall, F1 | ibug |
| Facial Action Coding System (FACS) | A coding system for describing facial movements as a set of 46 action units (AUs) defined by the location and movement of the facial muscles. | Accuracy, Precision, Recall, F1 | ibug |
| Emotion Reaction Intensity (ERI) | A scale that measures the intensity of the emotional reaction to a stimulus. | Average Pearson's correlation coefficient (ρ) across the 7 emotional reactions | ibug |
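The concordance correlation coefficient used for VA regression has a closed form: CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²), which penalizes both poor correlation and systematic bias between predictions and labels. A plain-Python sketch using population (biased) moments:

```python
def ccc(x, y):
    """Concordance correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Population variance of each sequence and their covariance.
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # CCC = 1 only when predictions match labels exactly; a constant
    # offset between them reduces the score even at perfect correlation.
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

For example, `ccc(preds, labels)` on identical sequences gives 1.0, while predictions shifted by a constant score strictly below 1.0 despite a Pearson correlation of 1.
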

<p align="right">[<a href="#readme-top">back to top</a>]</p>

## Contact

<!-- List of people who contributed to this Knowledge Exchange Repository --> <p align="right">[<a href="#readme-top">back to top</a>]</p>

## Acknowledgements

<!-- List of people who contributed to this Knowledge Exchange Repository -->
<p align="right">[<a href="#readme-top">back to top</a>]</p>

<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->