Enhancing Dialog Coherence with Event Graph Grounded Content Planning
Authors: Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results indicate the effectiveness of this framework in terms of dialog coherence and informativeness. ... As shown in Table 1, EGRL significantly outperforms all baselines in terms of all the metrics except for length-of-dialog (sign test, p-value < 0.01). (See the sign-test sketch after the table.) |
| Researcher Affiliation | Collaboration | 1Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin, China 2Baidu Inc., Beijing, China |
| Pseudocode | Yes | Algorithm 1 Event extraction from each story sentence (see the extraction sketch after the table) |
| Open Source Code | No | The paper refers to source code for the baselines (CCM, CMR, LaRL) but provides no link or explicit statement releasing the authors' own EGRL code. |
| Open Datasets | Yes | Weibo Corpus. [Shang et al., 2015] ... Twitter Corpus. [Ritter et al., 2011] ... Narrative Event Graph. The ROCStories corpus contains 98,161 five-sentence stories. |
| Dataset Splits | Yes | Weibo Corpus. [Shang et al., 2015] The Weibo corpus contains 2.6M message-response pairs for training, 10k pairs for validation and 10k pairs for test. Twitter Corpus. [Ritter et al., 2011] The corpus contains 1.3M dialogs for training, 10k for validation and 10k for test. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions the Adam optimizer, Transformers, an RNN decoder, and a BiLSTM, but provides no version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | The vocab size is 50000 and the dimension of all the representations is set to 512. Dropout rate is 0.3. The optimizer adopts Adam and the learning rate is set to 0.002. The discounting weight for reward is 0.95. ... we set maximum number of dialog turns to 8 (see the config sketch after the table) |
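The significance claim quoted in the Research Type row ("sign test, p-value < 0.01") corresponds to a paired sign test, which reduces to a binomial test on win/loss counts. A minimal sketch, assuming SciPy; the win/loss counts below are invented placeholders, since the paper does not report the raw pairwise counts:

```python
# Hedged sketch of a paired sign test over pairwise system comparisons.
# The counts are hypothetical, not the paper's data.
from scipy.stats import binomtest

wins, losses = 180, 90  # hypothetical counts of pairwise EGRL wins vs. losses (ties dropped)
result = binomtest(wins, n=wins + losses, p=0.5, alternative="greater")
print(f"sign-test p-value: {result.pvalue:.4f}")  # claim holds if < 0.01
```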
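The paper's Algorithm 1 extracts an event from each ROCStories sentence to build the narrative event graph. Below is a minimal sketch of one common way to do such extraction, using spaCy dependency parses to pull a (subject, verb lemma, object) tuple per main verb; the function `extract_events` and the tuple format are assumptions, not the paper's exact procedure:

```python
# Verb-centric event extraction sketch; the paper's Algorithm 1 may use a
# different event representation.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

def extract_events(sentence: str):
    """Return one (subject, verb, object) event per main verb in the sentence."""
    events = []
    for token in nlp(sentence):
        if token.pos_ != "VERB":
            continue
        subj = next((c.lemma_ for c in token.children if c.dep_ in ("nsubj", "nsubjpass")), None)
        obj = next((c.lemma_ for c in token.children if c.dep_ in ("dobj", "obj")), None)
        events.append((subj, token.lemma_, obj))
    return events

# Example on a ROCStories-style sentence:
# extract_events("Tom bought a new bike.") -> [("Tom", "buy", "bike")]
```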
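The reported experiment setup fits in a small configuration block. A minimal sketch collecting the stated values into a PyTorch-style setup; the `EGRLConfig` dataclass and the placeholder module are illustrative, and only the numeric values come from the paper:

```python
# Hyperparameters as reported in the paper; the config class and optimizer
# wiring are illustrative assumptions, not the authors' released code.
from dataclasses import dataclass

import torch

@dataclass
class EGRLConfig:
    vocab_size: int = 50_000        # "The vocab size is 50000"
    hidden_dim: int = 512           # dimension of all representations
    dropout: float = 0.3
    learning_rate: float = 0.002    # Adam learning rate
    reward_discount: float = 0.95   # discounting weight for the RL reward
    max_dialog_turns: int = 8

cfg = EGRLConfig()
model = torch.nn.Embedding(cfg.vocab_size, cfg.hidden_dim)  # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)
```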