End-to-End Knowledge-Routed Relational Dialogue System for Automatic Diagnosis
Authors: Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, Liang Lin
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a public medical dialogue dataset show our KR-DS significantly beats state-of-the-art methods (by more than 8% in diagnosis accuracy). |
| Researcher Affiliation | Collaboration | 1Sun Yat-Sen University, 2Dark Matter AI Inc., 3Soochow University |
| Pseudocode | No | The paper describes the method using text and equations but does not include a dedicated pseudocode block or algorithm figure. |
| Open Source Code | Yes | The source code will be released together with our DX dataset. |
| Open Datasets | Yes | We build a new DX dataset for the medical dialogue system, reserving the original self-reports and interaction utterances between doctors and patients. We collected data from a Chinese online health-care community (dxy.com)... The source code will be released together with our DX dataset. ... MZ dataset in this paper. The MZ dataset contains 710 user goals and 66 symptoms, covering 4 types of diseases. ...The results of the above three baselines are provided by (Wei et al. 2018). |
| Dataset Splits | No | For the DX dataset, 'We selected 423 dialogues for training and conducted inference on another 104 dialogues.' For the MZ dataset, it mentions 'using the provided train/test sets.' There is no explicit mention of a validation split percentage or count for either dataset. |
| Hardware Specification | No | The paper states 'We implement the system on Pytorch' but does not provide any specific hardware details such as GPU or CPU models, processor types, or memory amounts used for experiments. |
| Software Dependencies | No | The paper states 'We implement the system on Pytorch' but does not specify the version number of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We implement the system on Pytorch. To train the DQN composed of a two-layer neural network, the ϵ of the ϵ-greedy strategy is set to 0.1 for effective action space exploration and the γ in the Bellman equation is 0.9. The initial buffer size D is 10000 and the batch size is 32. The learning rate is 0.01. We choose SGD as the optimizer, and 100 simulated dialogues are added to the experience replay pool at each training epoch. Generally, we train the models for about 300 epochs. |
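
The experiment-setup row quotes the paper's DQN hyperparameters; the following minimal PyTorch sketch shows how such a setup could be wired together. This is not the authors' released code: the state and action dimensions (`STATE_DIM`, `NUM_ACTIONS`), the hidden width, and the use of a single Q-network without a target network are illustrative assumptions, since the paper does not specify them.

```python
# Minimal sketch of the quoted DQN setup. STATE_DIM, NUM_ACTIONS, and
# HIDDEN_DIM are assumed placeholders, not values from the paper.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS, HIDDEN_DIM = 128, 70, 256  # assumed sizes

# Two-layer neural network for the DQN, as described in the setup.
q_net = nn.Sequential(
    nn.Linear(STATE_DIM, HIDDEN_DIM),
    nn.ReLU(),
    nn.Linear(HIDDEN_DIM, NUM_ACTIONS),
)
optimizer = torch.optim.SGD(q_net.parameters(), lr=0.01)  # SGD, lr = 0.01

EPSILON, GAMMA = 0.1, 0.9          # epsilon-greedy exploration; Bellman discount
replay_pool = deque(maxlen=10000)  # experience replay pool, buffer size D = 10000
BATCH_SIZE = 32

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy action selection with epsilon = 0.1."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def train_step() -> None:
    """One DQN update toward the Bellman target y = r + gamma * max_a' Q(s', a')."""
    if len(replay_pool) < BATCH_SIZE:
        return
    # Transitions (s, a, r, s', done) come from simulated dialogues.
    batch = random.sample(replay_pool, BATCH_SIZE)
    states, actions, rewards, next_states, dones = map(torch.stack, zip(*batch))
    q_values = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        targets = rewards + GAMMA * q_net(next_states).max(1).values * (1 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Per the quoted setup, roughly 100 simulated dialogues would be pushed into `replay_pool` each epoch, with training run for about 300 epochs.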