CauAIN: Causal Aware Interaction Network for Emotion Recognition in Conversations

Authors: Weixiang Zhao, Yanyan Zhao, Xin Lu

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three benchmark datasets show that our model achieves better performance over most baseline models.
Researcher Affiliation | Academia | Weixiang Zhao, Yanyan Zhao, Xin Lu, Harbin Institute of Technology, China {wxzhao, yyzhao, xlu}@ir.hit.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets | Yes | We conduct experiments on three benchmark datasets from IEMOCAP [Busso et al., 2008], Daily Dialog [Li et al., 2017] and MELD [Poria et al., 2019b].
Dataset Splits | Yes | IEMOCAP: 120 train+val / 31 test dialogues, 5,810 train+val / 1,623 test utterances; Daily Dialog: 11,118 / 1,000 / 1,000 train/val/test dialogues, 87,170 / 8,069 / 7,740 utterances; MELD: 1,039 / 114 / 280 dialogues, 9,989 / 1,109 / 2,610 utterances. (These figures are also restated as a lookup structure below the table.)
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions 'RoBERTa Large model' and 'Adam optimizer' but does not provide specific version numbers for software dependencies or libraries.
Experiment Setup | Yes | For utterance-level feature extraction, we fine-tune RoBERTa Large model for a batch size of 32 and Adam optimizer is adopted with learning rate of 1e-5. Thus, the dimension of utterance-level feature vector d_m is 1024. For all representations in the following parts of CauAIN, d_h is set to 300. We train CauAIN with Adam optimizer in a learning rate of 1e-4. (See the configuration sketch below.)
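
For readability, the split statistics quoted in the Dataset Splits row can be laid out as a small lookup structure. This is purely a restatement of the numbers in the table, not code from the paper; note that IEMOCAP reports its training and validation counts jointly.

```python
# Split statistics as quoted in the paper's data table (illustrative only).
# IEMOCAP lists train and validation counts jointly.
DATASET_SPLITS = {
    "IEMOCAP": {
        "dialogues":  {"train+val": 120,    "test": 31},
        "utterances": {"train+val": 5_810,  "test": 1_623},
    },
    "DailyDialog": {
        "dialogues":  {"train": 11_118, "val": 1_000, "test": 1_000},
        "utterances": {"train": 87_170, "val": 8_069, "test": 7_740},
    },
    "MELD": {
        "dialogues":  {"train": 1_039, "val": 114,   "test": 280},
        "utterances": {"train": 9_989, "val": 1_109, "test": 2_610},
    },
}
```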
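
The quoted setup maps onto a short PyTorch-style configuration sketch. Only the hyperparameters (batch size 32, learning rates 1e-5 and 1e-4, d_m = 1024, d_h = 300) come from the paper; since no code was released, the CauAINModel class and all other names below are hypothetical placeholders.

```python
# Minimal sketch of the reported experiment setup, assuming a PyTorch stack.
# Hyperparameters come from the paper; CauAINModel is a hypothetical stand-in.
import torch
from torch import nn
from torch.optim import Adam

D_M = 1024       # utterance-level feature dimension (RoBERTa Large output)
D_H = 300        # hidden dimension used for all CauAIN representations
BATCH_SIZE = 32  # batch size used when fine-tuning RoBERTa Large

class CauAINModel(nn.Module):
    """Hypothetical placeholder for the CauAIN architecture."""

    def __init__(self, d_m: int = D_M, d_h: int = D_H, n_classes: int = 6):
        super().__init__()
        self.project = nn.Linear(d_m, d_h)           # utterance features -> d_h
        self.classifier = nn.Linear(d_h, n_classes)  # n_classes is dataset-dependent

    def forward(self, utterance_feats: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.relu(self.project(utterance_feats)))

# Stage 1: RoBERTa Large is fine-tuned for feature extraction with Adam, lr 1e-5
# (loading it would use the `transformers` library, omitted to keep this runnable):
# roberta = AutoModel.from_pretrained("roberta-large")
# roberta_opt = Adam(roberta.parameters(), lr=1e-5)

# Stage 2: CauAIN itself is trained with Adam at a learning rate of 1e-4.
model = CauAINModel()
optimizer = Adam(model.parameters(), lr=1e-4)
```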