Multi-Task Deep Learning for User Intention Understanding in Speech Interaction Systems
Authors: Yishuang Ning, Jia Jia, Zhiyong Wu, Runnan Li, Yongsheng An, Yanfeng Wang, Helen Meng
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on a data set of 135,566 utterances collected from the real-world Sogou Voice Assistant illustrate that our method can outperform the comparison methods by 6.9-24.5% in terms of F1-measure. Moreover, a real practice in the Sogou Voice Assistant indicates that our method can improve the performance on user intention understanding by 7%. |
| Researcher Affiliation | Collaboration | Yishuang Ning (1,2), Jia Jia (1,2), Zhiyong Wu (1,2,3), Runnan Li (1,2), Yongsheng An (1), Yanfeng Wang (4), Helen Meng (3). (1) Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Tsinghua National Laboratory for Information Science and Technology (TNList); (2) Tsinghua-CUHK Joint Research Center for Media Sciences, Technologies and Systems, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China; (3) Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, China; (4) Beijing Sogou Science and Technology Development Co., Ltd. |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper contains no statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The raw data set includes 135,566 Mandarin utterances provided by Sogou Voice Assistant. This is a proprietary dataset and no public access information (link, DOI, specific citation for public version) is provided. |
| Dataset Splits | Yes | All the results reported in this paper were based on 5-fold cross validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions machine learning models (e.g., LSTM, BN, SVM, CRF) but does not provide specific version numbers for any software libraries or dependencies used in implementation. |
| Experiment Setup | No | The paper describes the overall model architecture and features, but lacks specific details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings for the deep learning components. |
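The evaluation protocol quoted in the Dataset Splits row (5-fold cross validation scored with F1-measure) can be sketched in plain Python. This is a generic illustration of that protocol, not the paper's pipeline: the fold construction and binary-F1 definition are standard, while `train_and_predict` is a placeholder where the paper's multi-task deep model would go.

```python
import random

def five_fold_indices(n, seed=0):
    """Shuffle indices 0..n-1 and split them into 5 roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::5] for i in range(5)]

def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cross_validate_f1(x, y, train_and_predict):
    """Mean F1 over 5 folds.

    train_and_predict(train_x, train_y, test_x) -> list of predicted labels.
    """
    folds = five_fold_indices(len(x))
    scores = []
    for fold in folds:
        held_out = set(fold)
        train = [j for j in range(len(x)) if j not in held_out]
        preds = train_and_predict(
            [x[j] for j in train], [y[j] for j in train], [x[j] for j in fold]
        )
        scores.append(f1_score([y[j] for j in fold], preds))
    return sum(scores) / len(scores)
```

A trivial usage example with a majority-class baseline: `cross_validate_f1(x, y, lambda tx, ty, ex: [max(set(ty), key=ty.count)] * len(ex))`.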