DialogueRNN: An Attentive RNN for Emotion Detection in Conversations
Authors: Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, Erik Cambria
AAAI 2019, pp. 6818-6825 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model outperforms the state-of-the-art by a significant margin on two different datasets. |
| Researcher Affiliation | Academia | Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico; School of Computer Science and Engineering, Nanyang Technological University, Singapore; School of Computing, National University of Singapore, Singapore; Computer Science & Engineering, University of Michigan, Ann Arbor, USA |
| Pseudocode | No | The paper includes diagrams and mathematical equations to describe the model, but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Implementation available at https://github.com/senticnet/conv-emotion |
| Open Datasets | Yes | We use two emotion detection datasets, IEMOCAP (Busso et al. 2008) and AVEC (Schuller et al. 2012), to evaluate DialogueRNN. |
| Dataset Splits | Yes | We partition both datasets into train and test sets with a roughly 80/20 ratio such that the partitions do not share any speaker. Table 1 shows the distribution of train and test samples for both datasets: IEMOCAP — train+val 5810 utterances / 120 dialogues, test 1623 utterances / 31 dialogues; AVEC — train+val 4368 utterances / 63 dialogues, test 1430 utterances / 32 dialogues. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using GRU cells and the Adam optimizer, and tools like openSMILE, but it does not specify concrete version numbers for any software libraries or dependencies. |
| Experiment Setup | No | The paper states "Hyperparameters are optimized using grid search (values are added to the supplementary material)" and describes the loss function and optimizer, but does not provide specific hyperparameter values or detailed training configurations in the main text. |
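The split protocol quoted above (roughly 80/20, with no speaker shared between train and test) can be sketched as a grouped split: dialogues that share a speaker are bundled into one group, and whole groups are assigned to a partition. This is an illustrative reconstruction, not the authors' released code; the `(dialogue_id, speakers, n_utterances)` record format and the greedy group-assignment strategy are assumptions.

```python
from collections import defaultdict

def speaker_disjoint_split(dialogues, test_ratio=0.2):
    """Greedy speaker-disjoint train/test split (a sketch of the
    paper's 'partitions do not share any speaker' protocol; the
    input format and grouping strategy are illustrative assumptions).

    dialogues: list of (dialogue_id, set_of_speakers, n_utterances).
    Returns (train, test) lists of the same tuples.
    """
    # Union-find over speakers: dialogues that share a speaker must
    # land in the same partition, so merge them into components.
    parent = {}

    def find(s):
        parent.setdefault(s, s)
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    def union(a, b):
        parent[find(a)] = find(b)

    for _, speakers, _ in dialogues:
        speakers = list(speakers)
        for s in speakers[1:]:
            union(speakers[0], s)

    # Group dialogues by the root of any of their speakers.
    groups = defaultdict(list)
    for dia in dialogues:
        groups[find(next(iter(dia[1])))].append(dia)

    # Assign whole speaker groups to test until it holds roughly
    # test_ratio of all utterances; everything else goes to train.
    total = sum(n for _, _, n in dialogues)
    train, test, test_n = [], [], 0
    for group in sorted(groups.values(), key=len, reverse=True):
        size = sum(n for _, _, n in group)
        if test_n + size <= test_ratio * total:
            test.extend(group)
            test_n += size
        else:
            train.extend(group)
    return train, test
```

Because groups are assigned as a whole, the realized test fraction is only approximately `test_ratio`, which matches the paper's "roughly 80/20" wording.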