The Graph-based Mutual Attentive Network for Automatic Diagnosis

Authors: Quan Yuan, Jun Chen, Chao Lu, Haifeng Huang

Venue: IJCAI 2020

Reproducibility Assessment

Research Type: Experimental
LLM Response: The evaluation conducted on real EMR documents demonstrates that the proposed model is more accurate than previous sequence-learning-based diagnosis models.

Researcher Affiliation: Industry
LLM Response: Quan Yuan, Jun Chen, Chao Lu and Haifeng Huang, Baidu Inc., Beijing, China. {yuanquan02, chenjun22, luchao, huanghaifeng}@baidu.com

Pseudocode: No
LLM Response: The paper describes the GMAN model and its components (medical graph construction, GCN encoding, mutual attentive network) in text, but does not provide structured pseudocode or an algorithm block.

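Since the paper gives no pseudocode, the following is a minimal PyTorch sketch of the two components named above: a single GCN layer over the medical graph and a bidirectional (mutual) attention step between symptom and disease node embeddings. The layer shapes, the dot-product attention form, and all names here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) node features; a_hat: (N, N) normalized adjacency.
        return F.relu(self.linear(a_hat @ h))

def mutual_attention(symptoms: torch.Tensor, diseases: torch.Tensor):
    """Dot-product attention in both directions between the two node sets.

    symptoms: (S, d), diseases: (D, d). Returns one context vector per node.
    The exact attention form in the paper may differ; this is an assumption.
    """
    scores = symptoms @ diseases.T                    # (S, D) affinities
    sym_ctx = F.softmax(scores, dim=1) @ diseases     # each symptom attends to diseases
    dis_ctx = F.softmax(scores.T, dim=1) @ symptoms   # each disease attends to symptoms
    return sym_ctx, dis_ctx
```
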
Open Source Code: No
LLM Response: The paper mentions external open-source tools (Jieba, CliNER) but does not provide an explicit statement or link for the source code of the GMAN model developed in this paper.

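Both external tools are publicly available; as a point of reference, a minimal Jieba segmentation call looks like this (the sentence is an illustrative clinical-style example, not text from the paper's data):

```python
import jieba  # pip install jieba

# Segment a Chinese clinical-style sentence into tokens.
tokens = jieba.lcut("患者主诉头痛三天，伴有发热。")
print(tokens)  # e.g. ['患者', '主诉', '头痛', '三天', '，', '伴有', '发热', '。']
```
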
Open Datasets: Yes
LLM Response: For the reproducibility concerns, we choose MIMIC-III-50 [Mullenbach et al., 2018] as the English dataset in the evaluation besides the Chinese datasets. For MIMIC-III-50, we use the same training and testing sets from the original study. The public English NER for clinical notes, CliNER, is used to process MIMIC-III-50, which reports an 83.8% F1 score in the original paper [Boag et al., 2018].

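MIMIC-III-50 restricts MIMIC-III to the 50 most frequent ICD-9 codes. A rough pandas sketch of that filtering follows; the file and column names assume the standard MIMIC-III schema, and for a faithful reproduction the train/test document lists should be taken from the Mullenbach et al. release rather than re-derived:

```python
import pandas as pd

# Hypothetical local path; the actual file comes from the MIMIC-III distribution.
codes = pd.read_csv("DIAGNOSES_ICD.csv")

# Keep only the 50 most frequent ICD-9 codes, as in Mullenbach et al. (2018).
top50 = codes["ICD9_CODE"].value_counts().nlargest(50).index
codes_50 = codes[codes["ICD9_CODE"].isin(top50)]
```
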
Dataset Splits: No
LLM Response: The paper provides training and testing sample counts in Table 2, but does not explicitly mention validation dataset splits, percentages, or methods.

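Because only train/test counts are reported, a reproduction has to define its own validation protocol. One common choice is to hold out part of the training set, as in this sketch (the 10% fraction and the seed are our choices, not values from the paper):

```python
from sklearn.model_selection import train_test_split

# Placeholder standing in for the real training documents.
all_train_docs = list(range(1000))

# Hold out 10% of the training portion for validation.
train_docs, val_docs = train_test_split(all_train_docs, test_size=0.1, random_state=42)
print(len(train_docs), len(val_docs))  # 900 100
```
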
Hardware Specification: No
LLM Response: The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.

Software Dependencies: No
LLM Response: The paper mentions software like Jieba and CliNER but does not specify their version numbers for reproducibility.

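The two gaps above (hardware and dependency versions) are the kind a short environment log would close. A minimal sketch, assuming a PyTorch-based setup, which the paper does not actually state:

```python
import platform
from importlib.metadata import version

import torch

# Record the versions and hardware a run actually used.
print("Python:", platform.python_version())
print("PyTorch:", torch.__version__)
print("jieba:", version("jieba"))
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("GPU: none detected")
```
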
Experiment Setup: No
LLM Response: The paper mentions some configuration details, like k = 5 for graph pruning, but lacks comprehensive experimental setup details such as specific hyperparameter values (e.g., learning rate, batch size, optimizer) for training the models.

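Only the pruning parameter k = 5 is reported. A reproduction would need to fix and record the remaining hyperparameters explicitly, e.g. in a config like this sketch, where every None marks a value the paper leaves unspecified:

```python
# Experiment-configuration record; only graph_prune_k comes from the paper.
config = {
    "graph_prune_k": 5,     # reported: keep top-5 edges when pruning the medical graph
    "learning_rate": None,  # not reported
    "batch_size": None,     # not reported
    "optimizer": None,      # not reported
    "num_epochs": None,     # not reported
}
```
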