Knowledge Graph Grounded Goal Planning for Open-Domain Conversation Generation
Authors: Jun Xu, Haifeng Wang, Zhengyu Niu, Hua Wu, Wanxiang Che
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our model outperforms state-of-the-art baselines in terms of user-interest consistency, dialog coherence, and knowledge accuracy. Evaluations against both a user simulator and human subjects demonstrate the effectiveness of KnowHRL in terms of user-interest consistency, dialog coherence, and knowledge accuracy, when compared with state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | Jun Xu,1 Haifeng Wang,2 Zhengyu Niu,2 Hua Wu,2 Wanxiang Che1 1Harbin Institute of Technology, Harbin, China 2Baidu Inc., Beijing, China |
| Pseudocode | No | The paper describes the model architecture and processes using text and diagrams (Figure 1 and Figure 2), but it does not contain any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an explicit statement about releasing the source code for the described methodology or provide a direct link to a code repository. |
| Open Datasets | Yes | We use a publicly available knowledge-driven dialog dataset, DuConv, for pretraining of the multi-mapping based generator, baselines and the user simulator. More details at https://arxiv.org/abs/1906.05572 |
| Dataset Splits | Yes | We split it into training set (100k-turn), development set (10k-turn) and test set (10k-turn). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper describes the use of various model components and methods (e.g., RNN encoders, MLP networks, LSTM units, A2C method, TransE) but does not provide specific software dependencies with version numbers (e.g., programming language versions, library versions like PyTorch or TensorFlow). |
| Experiment Setup | Yes | MLPs are two-layer fully connected perceptrons with hidden layer size 512, and the number of MLPs N_LR = 10. In our experiment, {α_i} (i = 1..5) are set to 1, 5, 1, 5000, 0.5; β_1 is set to 1 and β_2 to 0.5; {φ_i} (i = 1..3) are set to 1, 1, 2. We use the user simulator to play the role of the human, then let each of the models to be evaluated generate the first utterance and chat with the user simulator until they reach the maximum number of turns (set to 7 in this work). We also set up human evaluation interfaces and ask human annotators to converse with each of the models until they reach the maximum number of turns (set to 10). A hedged configuration sketch follows the table. |
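
The paper does not state the framework, activations, or input/output dimensions of these MLPs, so the PyTorch usage, the ReLU activation, and the 512-dimensional inputs and outputs in the sketch below are assumptions; only the hidden size (512), the number of MLPs (N_LR = 10), the weight settings, and the turn limits are taken from the quoted setup.

```python
import torch.nn as nn

# Values quoted from the paper's experiment setup; the variable names are
# illustrative, not the authors' own.
HIDDEN_SIZE = 512                 # hidden layer size of each two-layer MLP
NUM_MLPS = 10                     # number of MLPs (N_LR = 10)
ALPHAS = [1, 5, 1, 5000, 0.5]     # {alpha_1 .. alpha_5}
BETAS = [1, 0.5]                  # beta_1, beta_2
PHIS = [1, 1, 2]                  # {phi_1 .. phi_3}
MAX_TURNS_SIMULATOR = 7           # max dialog turns against the user simulator
MAX_TURNS_HUMAN = 10              # max dialog turns against human annotators


def make_mlp(input_dim: int, output_dim: int) -> nn.Sequential:
    """Two-layer fully connected perceptron with a 512-unit hidden layer,
    matching the size reported in the paper; ReLU is an assumption."""
    return nn.Sequential(
        nn.Linear(input_dim, HIDDEN_SIZE),
        nn.ReLU(),
        nn.Linear(HIDDEN_SIZE, output_dim),
    )


# A bank of N_LR = 10 such MLPs, as stated in the setup
# (the 512-dim input/output sizes here are placeholders).
mlps = nn.ModuleList(make_mlp(512, 512) for _ in range(NUM_MLPS))
```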