Attention-Informed Mixed-Language Training for Zero-Shot Cross-Lingual Task-Oriented Dialogue Systems

Authors: Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, Pascale Fung

AAAI 2020, pp. 8433-8440

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | intensive experiments with different cross-lingual embeddings demonstrate the effectiveness of our approach.
Researcher Affiliation | Academia | Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, Pascale Fung; Center for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology; {zliucr, giwinata, zlinao, pxuab}@connect.ust.hk, pascale@ece.ust.hk
Pseudocode | No | The paper describes the proposed method using text and diagrams (Figure 1, Figure 2, Figure 3), but it does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code | Yes | The code is available at: https://github.com/zliucr/mixed-language-training
Open Datasets | Yes | Wizard of Oz (WOZ), a restaurant domain dataset, is used for training and evaluating dialogue state tracking models on English. It was enlarged into WOZ 2.0 by adding more dialogues, and recently, Mrkšić et al. (2017b) expanded WOZ 2.0 into Multilingual WOZ 2.0 by including two more languages (German and Italian). ... Recently, a multilingual task-oriented natural language understanding dialogue dataset was proposed by Schuster et al. (2019), which contains English, Spanish, and Thai across three domains (alarm, reminder, and weather).
Dataset Splits | Yes | Multilingual WOZ 2.0 contains 1200 dialogues for each language, where 600 dialogues are used for training, 200 for validation, and 400 for testing. (A hedged sanity-check sketch for these split sizes follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | The paper discusses training settings and baselines but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed system-level training configurations.
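
The Dataset Splits row is concrete enough to check programmatically when reproducing the experiments. Below is a minimal Python sketch for doing so; it is not taken from the authors' repository, and both the file-name pattern woz2_{split}_{lang}.json and the assumption that each file holds a JSON list of dialogues are hypothetical placeholders.

import json

# Expected per-language split sizes quoted in the Dataset Splits row above:
# 600 train + 200 validation + 400 test = 1200 dialogues per language.
EXPECTED = {"train": 600, "validation": 200, "test": 400}

def check_split_sizes(path_template="woz2_{split}_{lang}.json",
                      languages=("en", "de", "it")):
    # NOTE: the file layout here is an assumption for illustration, not the
    # format distributed with Multilingual WOZ 2.0.
    for lang in languages:
        for split, expected in EXPECTED.items():
            with open(path_template.format(split=split, lang=lang)) as f:
                dialogues = json.load(f)  # assumed: a JSON list of dialogues
            if len(dialogues) != expected:
                raise ValueError(
                    f"{lang}/{split}: found {len(dialogues)}, expected {expected}")
    print("All splits match 600/200/400 dialogues per language (1200 total).")

if __name__ == "__main__":
    check_split_sizes()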