Knowledge-Driven Encode, Retrieve, Paraphrase for Medical Image Report Generation
Authors: Christy Y. Li, Xiaodan Liang, Zhiting Hu, Eric P. Xing (pp. 6666-6673)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on two medical image report datasets (Demner-Fushman et al. 2015). Our KERP achieves the state-of-the-art performance on both datasets under both automatic evaluation metrics and human evaluation. |
| Researcher Affiliation | Collaboration | Christy Y. Li, ¹Duke University, ²Carnegie Mellon University, ³Petuum, Inc. yl558@duke.edu, {xiaodan1,zhitingh}@cs.cmu.edu, eric.xing@petuum.com |
| Pseudocode | No | The paper describes its algorithms textually and with diagrams but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement about open-sourcing the code or a link to a code repository. |
| Open Datasets | Yes | First, Indiana University Chest X-Ray Collection (IU X-Ray) (Demner-Fushman et al. 2015) is a public dataset consisting of 7,470 chest x-ray images paired with their corresponding diagnostic reports. |
| Dataset Splits | Yes | On both datasets, we randomly split the data by patients into training, validation and testing by a ratio of 7:1:2. |
| Hardware Specification | No | The paper does not specify the exact hardware components (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a DenseNet but does not provide specific version numbers for any software dependencies or libraries used for implementation. |
| Experiment Setup | Yes | We use learning rate 1e-3 for training and 1e-5 for fine-tuning, and reduce by 10 times when encountering validation performance plateau. We use early stopping, batch size 4 and drop out rate 0.1 for all training. |
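The paper's patient-level 7:1:2 split could be sketched as follows. This is a minimal illustration, not the authors' code; the seed, patient-ID representation, and rounding of the split boundaries are assumptions.

```python
import random

def split_by_patient(patient_ids, seed=0, ratios=(0.7, 0.1, 0.2)):
    """Randomly partition patients into train/val/test by the given ratios.

    Splitting by patient (rather than by image) prevents images from the
    same patient leaking across splits.
    """
    ids = sorted(patient_ids)          # deterministic base order
    random.Random(seed).shuffle(ids)   # seeded shuffle for reproducibility
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]       # remainder goes to test
    return train, val, test

# Example with 100 hypothetical patient IDs -> 70 / 10 / 20 patients.
train, val, test = split_by_patient(range(100))
```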
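The training schedule quoted above (reduce the learning rate 10x on a validation plateau, plus early stopping) could be sketched in framework-agnostic form. The patience thresholds below are assumptions, since the paper does not specify how a plateau is detected.

```python
class PlateauSchedule:
    """Track validation loss; decay the learning rate on plateaus and
    flag early stopping. Patience values are illustrative assumptions."""

    def __init__(self, lr=1e-3, factor=0.1, patience=3, stop_patience=6):
        self.lr = lr                        # 1e-3 for training per the paper
        self.factor = factor                # "reduce by 10 times" -> 0.1
        self.patience = patience            # epochs without improvement before decay
        self.stop_patience = stop_patience  # epochs before early stopping
        self.best = float("inf")
        self.bad_epochs = 0
        self.stopped = False

    def step(self, val_loss):
        """Update lr and stopping state from one epoch's validation loss."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.stop_patience:
                self.stopped = True
            elif self.bad_epochs % self.patience == 0:
                self.lr *= self.factor      # decay on plateau
        return self.lr

# Usage: call step() once per epoch and stop when .stopped is set.
sched = PlateauSchedule()
```

For fine-tuning, the paper restarts with `lr=1e-5` under the same plateau rule.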