Automatic Recognition of Emotional Subgroups in Images

Authors: Emmeke Veltmeijer, Charlotte Gerritsen, Koen Hindriks

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Based on these strategies, algorithms are developed to automatically recognize emotional subgroups. In particular, K-means and hierarchical clustering are used with location and emotion features derived from a fine-tuned VGG network. Additionally, we experiment with face size and gaze direction as extra input features. The best performance comes from hierarchical clustering with emotion, location and gaze direction as input.
Researcher Affiliation | Academia | Emmeke Veltmeijer, Charlotte Gerritsen and Koen Hindriks, Department of Computer Science, Vrije Universiteit Amsterdam, e.a.veltmeijer@vu.nl
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Code and supplementary material will be made available at https://github.com/Emmekea/emotional-subgroup-recognition.
Open Datasets | Yes | For this study, images are selected from three different datasets: EMOTIC [Kosti et al., 2019], GAFF 3.0 [Dhall et al., 2018], and HAPPEI [Dhall et al., 2012]. ... The individual emotion recognition module is trained on the RAF-DB database [Li et al., 2017].
Dataset Splits | No | For this study, images are selected from three different datasets: EMOTIC [Kosti et al., 2019], GAFF 3.0 [Dhall et al., 2018], and HAPPEI [Dhall et al., 2012]. A small subset is selected... This results in the selection of 171 images in total... Training is done with the VGG-Face network... We did not find explicit mention of training, validation, or test splits for the datasets used in their experiments, nor details on cross-validation.
Hardware Specification | No | We thank SURFsara (www.surfsara.nl) for the support in using the Lisa Compute Cluster. This mentions a compute cluster but does not provide specific hardware details such as GPU models, CPU types, or memory.
Software Dependencies | No | The k-means algorithm, using the implementation of scikit-learn [Pedregosa et al., 2011]... We implement this by feeding each face to the Hopenet-Lite implementation of Hopenet [Ruiz et al., 2018]... The paper mentions software such as scikit-learn and Hopenet-Lite but does not specify their version numbers or the versions of other dependencies.
Experiment Setup | Yes | For training we use a Stochastic Gradient Descent optimizer (initial lr=0.001, exponential decay rate=0.96, 100,000 decay steps) and categorical cross entropy as loss function. We fine-tune the network by replacing the final three fully connected layers with two new fully connected layers. The emotion information (three feature elements) has an influence that is 1.5 times as large as the location information (two feature elements). Therefore, for the baseline experiments, we multiply both coordinate elements by 1.5 to ensure an equal contribution to the feature vector.
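The Research Type row quotes the paper's clustering pipeline: K-means or hierarchical clustering over per-face emotion, location, and gaze features, with the location coordinates re-weighted by 1.5 in the baseline (Experiment Setup row). Below is a minimal sketch of that idea with scikit-learn; the feature layout, the fixed number of subgroups, and the example values are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Illustrative per-face feature vectors (assumed layout, not the paper's exact format):
#   3 emotion scores from a fine-tuned VGG classifier,
#   2 normalized face-centre coordinates (x, y),
#   2 head-pose angles (yaw, pitch) used as a gaze proxy.
faces = np.array([
    # emo_neg, emo_neu, emo_pos,   x,    y,    yaw,  pitch
    [0.1,      0.2,     0.7,      0.20, 0.30,  10.0,  2.0],
    [0.1,      0.1,     0.8,      0.25, 0.32,   8.0,  1.0],
    [0.8,      0.1,     0.1,      0.80, 0.60, -15.0, -5.0],
    [0.7,      0.2,     0.1,      0.85, 0.58, -12.0, -4.0],
])

# Baseline re-weighting from the Experiment Setup row: the two location elements
# are multiplied by 1.5 so they contribute as much as the three emotion elements.
features = faces.copy()
features[:, 3:5] *= 1.5

n_subgroups = 2  # assumed; the paper determines the number of subgroups per image

# Hierarchical (agglomerative) clustering, the best-performing variant reported.
hier_labels = AgglomerativeClustering(n_clusters=n_subgroups).fit_predict(features)

# K-means baseline, using the scikit-learn implementation cited in the paper.
kmeans_labels = KMeans(n_clusters=n_subgroups, n_init=10, random_state=0).fit_predict(features)

print("hierarchical:", hier_labels)
print("k-means:     ", kmeans_labels)
```

On this toy input, both methods assign the two happy, left-positioned, right-gazing faces to one subgroup and the two negative faces to another, which is the behaviour the quoted pipeline is after.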
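The Experiment Setup row also quotes the fine-tuning recipe for the individual emotion recognizer: SGD with an exponentially decaying learning rate (initial lr 0.001, decay rate 0.96, 100,000 decay steps), categorical cross-entropy loss, and the final three fully connected layers of VGG-Face replaced by two new ones. The sketch below mirrors those hyperparameters in Keras; the VGG16 backbone, input size, hidden width, and number of emotion classes are stand-in assumptions, since VGG-Face is not bundled with Keras and the paper does not list those values here.

```python
import tensorflow as tf

NUM_EMOTIONS = 3  # assumed number of per-face emotion classes

# Exponential learning-rate decay matching the quoted hyperparameters:
# initial lr 0.001, decay rate 0.96, 100,000 decay steps.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=100_000, decay_rate=0.96)

# Stand-in backbone: the paper fine-tunes VGG-Face; a Keras VGG16 body is used
# here purely for illustration.
backbone = tf.keras.applications.VGG16(include_top=False, weights=None,
                                       input_shape=(224, 224, 3), pooling="avg")

# Replace the original fully connected head with two new fully connected layers,
# as described in the quoted fine-tuning setup (layer width is an assumption).
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```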