Automatic Emergency Diagnosis with Knowledge-Based Tree Decoding

Authors: Ke Wang, Xuyan Chen, Ning Chen, Ting Chen

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on real-world data from the emergency department of a large-scale hospital indicate that the proposed model outperforms all baselines in both micro-F1 and macro-F1 and dramatically reduces the semantic distance. (A micro-/macro-F1 computation sketch follows the table.)
Researcher Affiliation | Academia | (1) Institute for Artificial Intelligence, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; (2) Tsinghua-Fuzhou Institute of Digital Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China; (3) Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
Pseudocode | No | The paper describes the model architecture and processes in detail but does not include a formal pseudocode block or algorithm.
Open Source Code | Yes | The code for our model is publicly available at https://github.com/kaisadadi/K-BTD.
Open Datasets | No | The data from the emergency department of Beijing Tsinghua Changgung Hospital, a large-scale hospital in China, was collected from 2015 to 2017, and it is the first large automatic diagnosis dataset in Chinese.
Dataset Splits | Yes | In the experiments, we conduct random five-fold cross-validation to examine the performance of our model as well as the baselines. (A cross-validation splitting sketch appears after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as CPU or GPU models.
Software Dependencies | No | The paper mentions using Adam for optimization and the pre-trained Tencent AI Lab Embedding Corpus, but does not provide specific version numbers for software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | In the experiments, we set λ to 0.5, ϵ to 0.45, and d_p to 256. We adopt Adam [Kingma and Ba, 2015] for optimization. The size of the mini-batch is 64, and the learning rate is 10^-5 for BERT-related models and 10^-3 otherwise. Dropout is set to 0.5, and weight decay is 10^-5. (An optimizer configuration sketch appears after the table.)
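
The Research Type row cites micro-F1 and macro-F1 as the headline metrics. As a minimal illustration only (not the authors' evaluation code), the two averages can be computed with scikit-learn; the label arrays below are hypothetical placeholders, not data from the K-BTD experiments.

```python
# Minimal sketch of micro- vs. macro-averaged F1, the metrics reported in the paper.
# The arrays are toy placeholders, not emergency-department data.
from sklearn.metrics import f1_score

y_true = [0, 2, 1, 2, 0, 1]   # gold diagnosis labels (toy example)
y_pred = [0, 1, 1, 2, 0, 2]   # model predictions (toy example)

micro_f1 = f1_score(y_true, y_pred, average="micro")  # pools all decisions before averaging
macro_f1 = f1_score(y_true, y_pred, average="macro")  # averages per-class F1 scores equally

print(f"micro-F1 = {micro_f1:.3f}, macro-F1 = {macro_f1:.3f}")
```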
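For the Dataset Splits row, the paper states that random five-fold cross-validation was used. The sketch below shows one way such splits could be generated with scikit-learn's KFold; the `records` array is a hypothetical stand-in, since the actual splitting code is not given in this report.

```python
# Sketch of random five-fold cross-validation as described in the paper.
# `records` is a hypothetical placeholder for the emergency-department records.
import numpy as np
from sklearn.model_selection import KFold

records = np.arange(1000)  # stand-in for the dataset indices
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(records)):
    # In the paper's protocol, each fold would train the model on `train_idx`
    # and report micro-/macro-F1 on `test_idx`; here we only print the sizes.
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test records")
```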
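The Experiment Setup row lists concrete hyperparameters: mini-batch size 64, dropout 0.5, Adam with a learning rate of 10^-5 for BERT-related models and 10^-3 otherwise, and weight decay 10^-5. Below is a PyTorch-flavoured sketch of wiring those values into an optimizer; `DiagnosisModel` is a hypothetical placeholder, not the released K-BTD implementation (which is at https://github.com/kaisadadi/K-BTD).

```python
# Sketch of the reported optimization settings in PyTorch.
# `DiagnosisModel` is a hypothetical stand-in, not the authors' model.
import torch
import torch.nn as nn

class DiagnosisModel(nn.Module):
    def __init__(self, vocab_size=10000, d_p=256, num_labels=50, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_p)   # d_p = 256 as reported
        self.dropout = nn.Dropout(dropout)           # dropout 0.5 as reported
        self.classifier = nn.Linear(d_p, num_labels)

    def forward(self, token_ids):
        hidden = self.dropout(self.embed(token_ids).mean(dim=1))
        return self.classifier(hidden)

model = DiagnosisModel()
is_bert_related = False  # the paper uses 1e-5 only for BERT-related models
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-5 if is_bert_related else 1e-3,  # learning rates from the paper
    weight_decay=1e-5,                     # weight decay from the paper
)
batch_size = 64  # mini-batch size from the paper
```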