Converse, Focus and Guess – Towards Multi-Document Driven Dialogue
Authors: Han Liu, Caixia Yuan, Xiaojie Wang, Yushu Yang, Huixing Jiang, Zhongyuan Wang
AAAI 2021, pp. 13380–13387 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our method significantly outperforms several strong baseline methods and is very close to human's performance. |
| Researcher Affiliation | Collaboration | Beijing University of Posts and Telecommunications, Beijing, China; Meituan, Beijing, China |
| Pseudocode | No | The paper describes methods in text and with diagrams but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/laddie132/MD3 |
| Open Datasets | Yes | We build a benchmark GuessMovie dataset for the MD3 task based on the WikiMovies dataset (Miller et al. 2016). |
| Dataset Splits | Yes | We divide GuessMovie into two disjoint parts. The 70% part is used for pre-training document representation and the NLU module, and the remaining 30% is used for training MD3 with 50k simulations using the REINFORCE (Williams 1992) algorithm. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running experiments. |
| Software Dependencies | No | The paper mentions Adam optimizer, GloVe word embedding, and BERT but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We use the Adam (Kingma and Ba 2014) optimizer with learning rate 0.001 and GloVe (Pennington, Socher, and Manning 2014) word embeddings. The number of candidate documents for document representation and dialogue is 32 by default. The maximum number of turns is 5. The probability threshold K for whether performing a Guess action is 0.5. |
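The disjoint 70%/30% split described in the Dataset Splits row can be sketched as follows. This is a minimal illustration only, not the authors' released code; the function name `split_documents`, the seed, and the use of document IDs are assumptions for the sketch.

```python
import random

def split_documents(doc_ids, pretrain_frac=0.70, seed=42):
    """Split document IDs into two disjoint parts: the 70% part for
    pre-training document representation and the NLU module, and the
    remaining 30% for training MD3."""
    ids = list(doc_ids)
    rng = random.Random(seed)   # fixed seed for a reproducible split
    rng.shuffle(ids)
    cut = int(len(ids) * pretrain_frac)
    return ids[:cut], ids[cut:]

pretrain, md3_train = split_documents(range(100))
# The two parts are disjoint and together cover the whole dataset.
assert set(pretrain).isdisjoint(md3_train)
```

Shuffling before cutting avoids any ordering bias in the source data; because the parts are slices of one shuffled list, disjointness is guaranteed by construction.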