Learning Personalized End-to-End Goal-Oriented Dialog
Authors: Liangchen Luo, Wenhao Huang, Qi Zeng, Zaiqing Nie, Xu Sun
AAAI 2019, pp. 6794-6801 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems. The PERSONALIZED MEMN2N outperforms current state-of-the-art methods with over 7% improvement in terms of per-response accuracy. A test with real human users also illustrates that the proposed model leads to better outcomes, including higher task completion rate and user satisfaction. |
| Researcher Affiliation | Collaboration | Liangchen Luo, Wenhao Huang, Qi Zeng, Zaiqing Nie, Xu Sun. MOE Key Lab of Computational Linguistics, School of EECS, Peking University, Beijing, China; Shanghai Discovering Investment, Shanghai, China; Alibaba AI Labs, Beijing, China |
| Pseudocode | Yes | Algorithm 1: Response Prediction by PERSONALIZED MEMN2N (a generic memory-network sketch follows the table) |
| Open Source Code | No | The paper mentions accessing a dataset from ParlAI (http://parl.ai/), but it does not provide a link or statement about the availability of the source code for their proposed model or methodology. |
| Open Datasets | Yes | The personalized bAbI dialog dataset (Joshi, Mi, and Faltings 2017) is a multi-turn dialog corpus extended from the bAbI dialog dataset (Bordes, Boureau, and Weston 2017). [...] We get the dataset released on ParlAI (http://parl.ai/). |
| Dataset Splits | No | The paper mentions [...] |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as CPU/GPU models, memory, or cloud specifications. |
| Software Dependencies | No | The paper mentions 'Nesterov accelerated gradient algorithm' and 'Xavier initializer', which are algorithms/methods, but does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The learning rate is 0.001, and the parameter of momentum γ is 0.9. Gradients are clipped to avoid gradient explosion with a threshold of 10. Models are trained in mini-batches with a batch size of 64. The dimensionality of word/profile embeddings is 128. We set the maximum context memory and global memory size (i.e. number of utterances) as 250 and 1000, separately. (These settings are collected in the training-configuration sketch below the table.) |
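
The paper's Algorithm 1 (response prediction by PERSONALIZED MEMN2N) is not reproduced in this summary. For orientation only, the following is a minimal single-hop sketch of a generic end-to-end memory network response scorer in PyTorch. The class name `MemN2NSketch`, the bag-of-words sentence encoding, and the tensor shapes are assumptions; the profile embedding and the split context/global memories that distinguish the PERSONALIZED MEMN2N are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemN2NSketch(nn.Module):
    """Single-hop end-to-end memory network that ranks candidate responses.

    NOTE: a generic MemN2N scoring pass, not the authors' Algorithm 1; the
    personalization components of the PERSONALIZED MEMN2N are left out.
    """

    def __init__(self, vocab_size: int, num_candidates: int, embed_dim: int = 128):
        super().__init__()
        self.embed_a = nn.Embedding(vocab_size, embed_dim)      # memory (history) encoder
        self.embed_b = nn.Embedding(vocab_size, embed_dim)      # query (last user turn) encoder
        self.embed_w = nn.Embedding(num_candidates, embed_dim)  # candidate-response encoder

    def forward(self, memory, query, candidate_ids):
        # memory: (batch, mem_slots, sent_len), query: (batch, sent_len),
        # candidate_ids: (num_candidates,) -- all LongTensors of word/candidate ids.
        m = self.embed_a(memory).sum(dim=2)             # bag-of-words memory slots
        q = self.embed_b(query).sum(dim=1)              # bag-of-words query
        attn = F.softmax(torch.bmm(m, q.unsqueeze(2)).squeeze(2), dim=1)
        o = torch.bmm(attn.unsqueeze(1), m).squeeze(1)  # attended memory read-out
        cand = self.embed_w(candidate_ids)              # (num_candidates, embed_dim)
        return (q + o) @ cand.t()                       # scores: (batch, num_candidates)
```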
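
The hyperparameters quoted in the Experiment Setup row, together with the Nesterov accelerated gradient and Xavier initializer mentioned under Software Dependencies, can be assembled into a training configuration. The sketch below assumes PyTorch and reuses the hypothetical `MemN2NSketch` class above; the vocabulary and candidate-set sizes are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Values quoted in the Experiment Setup row; vocab/candidate sizes are placeholders.
LR, MOMENTUM, CLIP_NORM, BATCH_SIZE, EMBED_DIM = 0.001, 0.9, 10.0, 64, 128

model = MemN2NSketch(vocab_size=10000, num_candidates=4000, embed_dim=EMBED_DIM)

# Xavier initialization, as stated in the paper.
for p in model.parameters():
    if p.dim() > 1:
        nn.init.xavier_uniform_(p)

# Nesterov accelerated gradient with momentum 0.9.
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=MOMENTUM, nesterov=True)


def training_step(memory, query, candidate_ids, target):
    """One update on a mini-batch (size BATCH_SIZE, assembled by a DataLoader, not shown)."""
    optimizer.zero_grad()
    scores = model(memory, query, candidate_ids)
    loss = F.cross_entropy(scores, target)
    loss.backward()
    # Clip gradients at a norm of 10 to avoid gradient explosion.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=CLIP_NORM)
    optimizer.step()
    return loss.item()
```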