Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Dual-View Learning for Conversational Emotion Recognition Through Context and Emotion-Shift Modeling

Authors: Xupeng Zha, Huan Zhao, Guanghui Ye, Zixing Zhang

AAAI 2025

Each entry below lists a reproducibility variable, its classified result, and the LLM response supporting that classification.
Research Type: Experimental
"We validate DVL-CER through extensive experiments on two widely used datasets, IEMOCAP and Emory NLP. The results demonstrate that DVL-CER achieves state-of-the-art performance, delivering robust and high-quality emotion distributions compared with existing CER methods and other dual-view learning strategies."
Researcher Affiliation: Academia
"Xupeng Zha, Huan Zhao*, Guanghui Ye, Zixing Zhang — College of Computer Science and Electronic Engineering, Hunan University, China"
Pseudocode: No
The paper describes the methodology using textual explanations and mathematical formulations, and provides a block diagram (Figure 2). It does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: No
The paper neither makes an explicit statement about releasing source code nor links to a code repository. It does state "Appendices are available at https://drive.google.com/file/d/1pRNxKGoDQVgNy6GGpiC6XgyJzRzoV00O/view", but that link is for the appendices, not for code.
Open Datasets: Yes
"We evaluate our DVL-CER approach using two benchmark datasets: IEMOCAP (Busso et al. 2008) and Emory NLP (Zahiri and Choi 2018)."
Dataset Splits: Yes
Table 1 of the paper reports the dataset statistics:

Dataset    #Dialogues (train/val/test)   #Utterances (train/val/test)   #Classes
IEMOCAP    120 / 31                      4,810 / 1,000 / 1,623          6
Emory NLP  713 / 99 / 85                 9,934 / 1,344 / 1,328          7
Hardware Specification: No
The paper states, "Additional experimental details are available in Appendix C," but the main text contains no specific hardware details such as GPU models, CPU types, or memory specifications.
Software Dependencies: No
The paper mentions the RoBERTa-Large model (Liu et al. 2019), the Adam optimizer (Kingma and Ba 2014), and the Transformer architecture, but it does not specify any software libraries or packages with version numbers used for the implementation.
Experiment Setup: No
The paper notes that "λ1 and λ2 are hyperparameters that balance the tradeoff between independence (LCCM and LESM) and correlation (LθMSE and LψMSE)" and that additional experimental details are available in Appendix C. However, specific numerical values for these or other hyperparameters (e.g., learning rate, batch size, number of epochs) are not given in the main text.