TITAN: Task-oriented Dialogues with Mixed-Initiative Interactions

Authors: Sitong Yan, Shengli Song, Jingyang Li, Shiqi Meng, Guangneng Hu

IJCAI 2023

Reproducibility Variable Result LLM Response
Research Type Experimental In this paper, we construct a multi-domain task-oriented dialogue dataset with mixed-initiative strategies, TITAN, from the large-scale dialogue corpus MultiWOZ 2.1. It contains a total of 1,800 human-human conversations where the system can either ask clarification questions actively or provide relevant information to address failure situations and implicit user requests. We report the results of several baseline models on system response generation and dialogue act prediction to assess the performance of SOTA methods on TITAN.
Researcher Affiliation Academia School of Computer Science and Technology, Xidian University, Xi'an, China
Pseudocode No No pseudocode or algorithm blocks were found in the paper.
Open Source Code Yes We release TITAN dataset and code for evaluation at https://github.com/styanXDU/TITAN-evaluation-master
Open Datasets Yes We release TITAN dataset and code for evaluation at https://github.com/styanXDU/TITAN-evaluation-master
Dataset Splits Yes To further explore the performance of baseline models, we split TITAN into training, dev, and test sets (shown in Table 2): 1,440 train / 180 dev / 180 test conversations, 1,800 in total. Table 2: Overall statistics of TITAN. Training, dev, and test sets are split 8:1:1.
Hardware Specification Yes A single NVIDIA GeForce 3080Ti GPU with 16GB memory is used during the training and testing for both tasks.
Software Dependencies No The paper mentions models like GPT-2 and BERT, but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup No The paper describes the general experimental process and hardware used, but lacks specific details on hyperparameters such as learning rate, batch size, or optimizer settings.
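The 8:1:1 split reported above (1,440 / 180 / 180 of 1,800 conversations) can be sketched as follows. This is a minimal illustration, not the authors' released evaluation code; the seed and the shuffle-then-slice approach are assumptions, since the paper does not specify how the split was drawn.

```python
import random

def split_8_1_1(conversations, seed=42):
    """Shuffle a list of dialogues and slice it into 8:1:1 train/dev/test.

    The seed is a hypothetical choice for reproducibility; TITAN's actual
    split procedure is not documented in the paper.
    """
    convs = list(conversations)
    random.Random(seed).shuffle(convs)
    n = len(convs)
    n_train = int(n * 0.8)
    n_dev = int(n * 0.1)
    return (convs[:n_train],                      # training set
            convs[n_train:n_train + n_dev],       # dev set
            convs[n_train + n_dev:])              # test set

# With 1,800 conversations this yields 1,440 / 180 / 180, matching Table 2.
train, dev, test = split_8_1_1(range(1800))
```

Slicing after a single seeded shuffle keeps the three sets disjoint and makes the partition deterministic across runs.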