MALA: Cross-Domain Dialogue Generation with Action Learning
Authors: Xinting Huang, Jianzhong Qi, Yu Sun, Rui Zhang
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments using multi-domain datasets, SMD and MultiWOZ, show that our proposed model achieves consistent improvements over the baseline models in terms of both task completion and language quality. |
| Researcher Affiliation | Collaboration | The University of Melbourne; Twitter Inc. |
| Pseudocode | No | The paper describes its method using textual descriptions and mathematical formulations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We use two multi-domain human-human conversational datasets: (1) SMD dataset (Eric and Manning 2017) contains 2425 dialogues, and has three domains: calendar, weather, navigation; (2) MULTIWOZ dataset (Budzianowski et al. 2018) is the largest existing task-oriented corpus spanning over seven domains. |
| Dataset Splits | Yes | We use the separation of training, validation and testing data as in the original SMD and MULTIWOZ datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'three-layer transformer' and 'VQ-VAE' as components but does not provide specific version numbers for any software, libraries, or frameworks used in the implementation or experimentation. |
| Experiment Setup | No | The paper mentions some architectural details for the base model, such as 'a three-layer transformer... with a hidden size of 128 and 4 heads', but does not provide specific hyperparameters like learning rate, batch size, or number of epochs, nor a detailed training configuration (see the sketch after this table). |
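Because no code is released, any re-implementation has to start from the architectural figures quoted in the table. The following is a minimal PyTorch sketch, not the authors' implementation, of a base encoder matching those figures (three transformer layers, hidden size 128, 4 attention heads); the class name, vocabulary size, feed-forward width, and dropout value are hypothetical choices, since the paper does not report them.

```python
# Minimal sketch (not the authors' code): a three-layer transformer encoder
# with hidden size 128 and 4 attention heads, matching the base-model details
# quoted above. Vocabulary size, feed-forward width, and dropout are assumed.
import torch
import torch.nn as nn

class BaseUtteranceEncoder(nn.Module):  # hypothetical name
    def __init__(self, vocab_size=10000, d_model=128, n_heads=4,
                 n_layers=3, d_ff=512, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=d_ff, dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_ids, padding_mask=None):
        # token_ids: (batch, seq_len); padding_mask: True at padded positions
        x = self.embed(token_ids)
        return self.encoder(x, src_key_padding_mask=padding_mask)

# Example usage with dummy token ids
enc = BaseUtteranceEncoder()
out = enc(torch.randint(0, 10000, (2, 16)))  # -> shape (2, 16, 128)
```

Training hyperparameters (learning rate, batch size, number of epochs) and the VQ-VAE action-learning component would still have to be chosen independently, since the paper reports neither.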