A Personalized Affective Memory Model for Improving Emotion Recognition
Authors: Pablo Barros, German Parisi, Stefan Wermter
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model in two steps: first, how well the model can learn a general representation of emotion expressions, using the Affect Net (Mollahosseini et al., 2017) dataset. And second, how the novel personalized affective memory performs when recognizing emotion expressions of specific persons using the continuous expressions on the OMG-Emotion dataset (Barros et al., 2018). We compare our model with state-of-the-art solutions on both datasets. Furthermore, our experiments include ablation studies and neural visualizations in order to explain the behavior of our model. |
| Researcher Affiliation | Collaboration | Pablo Barros (1), German I. Parisi (1, 2), Stefan Wermter (1) ... (1) Knowledge Technology, Department of Informatics, University of Hamburg, Germany; (2) Apprente Inc., Mountain View, CA, USA. |
| Pseudocode | No | The paper describes the model architecture and training process in textual descriptions and diagrams, but it does not include any explicit pseudocode blocks or algorithms. |
| Open Source Code | No | The paper mentions the OMG-Emotion website (https://bit.ly/2XUJBjq), which links to a dataset, but it provides no link to, or explicit statement about the availability of, source code for the methodology it describes. |
| Open Datasets | Yes | We evaluate our model in two steps: first, how well the model can learn a general representation of emotion expressions, using the Affect Net (Mollahosseini et al., 2017) dataset. ... And second, how the novel personalized affective memory performs when recognizing emotion expressions of specific persons using the continuous expressions on the OMG-Emotion dataset (Barros et al., 2018). |
| Dataset Splits | Yes | The Affect Net dataset ... has its own separation between training, testing and validation samples. ... We train the PK in each of these experiments with the training subset of the Affect Net corpus and evaluate it using the validation subset. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions the Adam Optimizer and references the VGG Face model and AlexNet, but it does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | The final coefficient values are as follows: λ1 = 1, λ2 = 0.02, λ3 = 0.3, λ4 = 0.01 and λ5 = 0.01. The batch size is 48. The normal distribution with mean 0 and standard deviation 0.02 is employed for the initialization of the weights of all layers. All biases are initially set to 0. For optimization, the Adam Optimizer with learning rate α = 0.0002, β1 = 0.5 and β2 = 0.999 is employed. ... Table 1 (Training parameters of each affective memory): Epochs = 10; Activity threshold (ta) = 0.4; Habituation threshold (th) = 0.2; Habituation modulation (τi, κ) = 0.087, 0.032; Labeling factor (γ) = 0.4 |
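Since the paper names no framework (see Software Dependencies above), the reported setup can only be sketched generically. Below is a minimal plain-Python sketch of the quoted hyperparameters: weight initialization from N(0, 0.02), zero biases, and an Adam update with α = 0.0002, β1 = 0.5, β2 = 0.999. The loss-term coefficients λ1–λ5 are collected in a dict for reference; the epsilon term is an assumed standard default, as the paper does not state it.

```python
import math
import random

# Hyperparameters quoted from the paper's experiment setup.
LAMBDAS = {"l1": 1.0, "l2": 0.02, "l3": 0.3, "l4": 0.01, "l5": 0.01}
BATCH_SIZE = 48
LR, BETA1, BETA2 = 0.0002, 0.5, 0.999
EPS = 1e-8  # assumed default; not stated in the paper

def init_weights(n, mean=0.0, std=0.02):
    """Weights drawn from N(0, 0.02); biases would start at 0 per the paper."""
    return [random.gauss(mean, std) for _ in range(n)]

def adam_step(w, grads, m, v, t, lr=LR, b1=BETA1, b2=BETA2, eps=EPS):
    """One Adam update (t is the 1-based step count) with the paper's settings."""
    for i, g in enumerate(grads):
        m[i] = b1 * m[i] + (1 - b1) * g          # first-moment estimate
        v[i] = b2 * v[i] + (1 - b2) * g * g      # second-moment estimate
        m_hat = m[i] / (1 - b1 ** t)             # bias correction
        v_hat = v[i] / (1 - b2 ** t)
        w[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w
```

With β1 = 0.5 (lower than the common 0.9), the first-moment average decays faster, a choice typical of GAN-style training; the sketch only illustrates the reported numbers, not the model itself.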