Knowledge Bridging for Empathetic Dialogue Generation

Authors: Qintong Li, Piji Li, Zhaochun Ren, Pengjie Ren, Zhumin Chen

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on a benchmark dataset verify the effectiveness of the proposed method.
Researcher Affiliation | Collaboration | School of Computer Science and Technology, Shandong University, Qingdao, China; Tencent AI Lab, Shenzhen, China; Department of Computer Science, The University of Hong Kong, Hong Kong SAR, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code and dataset are available at http://github.com/qtli/KEMP.
Open Datasets | Yes | We conduct our experiments on the EMPATHETICDIALOGUES dataset (Rashkin et al. 2019).
Dataset Splits | Yes | We obtain 17,802 dialogues in the training set, 2,628 in the validation set, and 2,494 in the testing set (see the loading sketch below the table).
Hardware Specification | Yes | We implemented all models in PyTorch (Paszke et al. 2017) with a single Tesla V100 GPU.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2017)' but does not specify a version number for it or other software dependencies.
Experiment Setup | Yes | All common hyperparameters are the same as in Li et al. (2020). The maximum numbers of external concepts introduced per dialogue and per token are set to 10 and 5, respectively. The threshold α used in emotional context graph construction is 0.1. Loss weights γ1, γ2, γ3 are set to 1, 1, and 0.1, respectively. The learning rate is varied during training following Vaswani et al. (2017), and early stopping is applied. At inference, the maximum decoding step is set to 30 (see the schedule sketch below).
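
For the dataset-splits row, a minimal loading sketch. It assumes the EMPATHETICDIALOGUES release on the HuggingFace Hub (dataset id "empathetic_dialogues" with "conv_id" and "utterance" fields), which is not stated in the paper; the paper's counts (17,802 / 2,628 / 2,494) are per dialogue after the authors' preprocessing, whereas the raw release is organized per utterance, so the numbers only match once utterances are grouped by conversation.

    from collections import defaultdict

    from datasets import load_dataset

    # Assumed dataset id on the HuggingFace Hub; the paper cites Rashkin et
    # al. (2019) and ships its processed copy in the KEMP repository instead.
    # Newer versions of `datasets` may require trust_remote_code=True here.
    raw = load_dataset("empathetic_dialogues")

    for split in ("train", "validation", "test"):
        # Group utterance-level rows into dialogues by conversation id.
        dialogues = defaultdict(list)
        for row in raw[split]:
            dialogues[row["conv_id"]].append(row["utterance"])
        print(f"{split}: {len(dialogues)} dialogues")
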
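For the experiment-setup row, a minimal sketch of the learning-rate schedule from Vaswani et al. (2017) that the authors say they follow, lr = d_model^(-0.5) * min(step^(-0.5), step * warmup^(-1.5)), wired into PyTorch. The values d_model = 300 and warmup = 8000, the stand-in model, and the loss-term names are illustrative assumptions, not numbers or names reported in the paper.

    import torch

    def noam_lr(step: int, d_model: int = 300, warmup: int = 8000) -> float:
        # lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)
        step = max(step, 1)  # guard against step**-0.5 at step 0
        return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

    model = torch.nn.Linear(300, 300)  # stand-in for the dialogue model
    # Base lr of 1.0 so LambdaLR treats noam_lr's output as the absolute
    # rate; the Adam betas/eps follow Vaswani et al. (2017).
    optimizer = torch.optim.Adam(model.parameters(), lr=1.0,
                                 betas=(0.9, 0.98), eps=1e-9)
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_lr)

    # The reported weights γ1, γ2, γ3 = 1, 1, 0.1 would combine the model's
    # three loss terms as: loss = 1.0*loss_1 + 1.0*loss_2 + 0.1*loss_3
    # (placeholder names; the paper defines which term each weight scales).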