Pathological Evidence Exploration in Deep Retinal Image Diagnosis
Authors: Yuhao Niu, Lin Gu, Feng Lu, Feifan Lv, Zongji Wang, Imari Sato, Zijian Zhang, Yangyan Xiao, Xunzhang Dai, Tingting Cheng | pp. 1093-1101
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "As verified by a panel of 5 licensed ophthalmologists, our synthesized images carry the symptoms that are directly related to diabetic retinopathy diagnosis. The panel survey also shows that our generated images is both qualitatively and quantitatively superior to existing methods." (Abstract). From the "Quantitative Comparison" part of the Experiment Results section: "To further strengthen our method, we organized a peer review by a board of 5 professional ophthalmologists... The p value of T-test is 9.80e-10 and 7.95e-5 for fundus and lesion realness respectively that our mean score is higher than Fila-s GAN." |
| Researcher Affiliation | Academia | (1) State Key Laboratory of VR Technology and Systems, School of CSE, Beihang University, Beijing, China; (2) Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China; (3) Peng Cheng Laboratory, Shenzhen, China; (4) National Institute of Informatics, Japan; (5) Xiangya Hospital, Central South University, China; (6) The Second Xiangya Hospital of Central South University, China |
| Pseudocode | No | The paper describes the network architecture and procedures in detail, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links to source code repositories, nor does it explicitly state that the code for their methodology is being made open source or publicly available. |
| Open Datasets | Yes | In this paper, we select three datasets: DRIVE (Staal et al. 2004), STARE (Hoover, Kouznetsova, and Goldbaum 2000) and Kaggle (Kaggle 2016). |
| Dataset Splits | No | The paper states that "DRIVE contains 20 training images and 20 test images" and that "The Kaggle dataset contains 53576 training images and 35118 test images", but it does not specify an explicit validation split (no proportions or sample counts for validation). A hypothetical hold-out sketch appears after this table. |
| Hardware Specification | Yes | "All the experiments are tested out on a server with Intel Xeon E5-2643 CPU, 256GB memory and Titan-Xp GPU." |
| Software Dependencies | No | The paper mentions using TensorFlow for auto-differentiation and refers to VGG-19 and the ADAM optimizer, but it does not provide specific version numbers for any software libraries, frameworks, or programming languages used (a hedged Gram-loss sketch over VGG-19 features appears after the table). |
| Experiment Setup | Yes | "The chosen norm in above equations is L1." Loss weights: w_dd = 1, w_tv = 100, w_dp = 10, w_mv = 5·w_tv, w_gram = 10^6. "The batch size is set to 1... The training is done using the ADAM optimizer (Kingma and Ba 2014) and the learning rate is set to 0.0002 for the generator and 0.0001 for the discriminator... The training finishes after 20000 mini-batches." Hedged configuration sketches follow this table. |
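
The split counts in the Dataset Splits row come straight from the quoted text; because no validation protocol is described, the snippet below is only a minimal sketch of one way a re-implementation might carve a validation subset out of the reported 53576 Kaggle training images. The 10% hold-out fraction and the `holdout_split` helper are assumptions for illustration, not the authors' procedure.

```python
"""Hypothetical validation hold-out for the Kaggle DR training set.

The paper reports 53576 training and 35118 test images but no validation
split; the 90/10 hold-out below is an assumption for illustration only.
"""
import random


def holdout_split(image_ids, val_fraction=0.1, seed=0):
    """Deterministically shuffle the ids and carve off a validation subset."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]  # (train_ids, val_ids)


# Using the reported Kaggle training size of 53576 images.
train_ids, val_ids = holdout_split(range(53576))
print(len(train_ids), len(val_ids))  # 48219 5357
```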
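
For the reported optimization settings in the Experiment Setup row, a minimal sketch of how they might be wired up is shown below, assuming TensorFlow 2.x / Keras (the paper names TensorFlow but no version). The `generator_loss` helper and the `terms` dictionary are placeholders for the paper's individual loss terms; only the numeric hyperparameters are taken from the quoted text.

```python
"""Sketch of the reported optimization settings (TensorFlow 2.x/Keras assumed)."""
import tensorflow as tf

# Loss weights as reported: w_dd = 1, w_tv = 100, w_dp = 10,
# w_mv = 5 * w_tv, w_gram = 10^6.
W_DD, W_TV, W_DP = 1.0, 100.0, 10.0
W_MV = 5.0 * W_TV
W_GRAM = 1e6

BATCH_SIZE = 1        # "The batch size is set to 1"
TOTAL_STEPS = 20000   # "The training finishes after 20000 mini-batches"

# ADAM with the reported learning rates.
gen_opt = tf.keras.optimizers.Adam(learning_rate=2e-4)    # generator
disc_opt = tf.keras.optimizers.Adam(learning_rate=1e-4)   # discriminator


def l1(a, b):
    """The chosen norm for the loss terms is L1."""
    return tf.reduce_mean(tf.abs(a - b))


def generator_loss(terms):
    """Weighted sum of the individual loss terms; `terms` is a placeholder
    dict of scalars (dd, tv, dp, mv, gram) computed elsewhere."""
    return (W_DD * terms["dd"] + W_TV * terms["tv"] + W_DP * terms["dp"]
            + W_MV * terms["mv"] + W_GRAM * terms["gram"])
```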
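
Since the setup lists a w_gram term and the dependency row mentions VGG-19, the sketch below shows a common way such a Gram-matrix (style-type) loss is implemented on VGG-19 features. The tap layer (`block3_conv3`), the input range, and the L1 comparison are assumptions; the paper does not give these details.

```python
"""Hedged sketch of a VGG-19 Gram-matrix loss (layer and input range assumed)."""
import tensorflow as tf

# Pretrained VGG-19 used as a frozen feature extractor; the tap layer is assumed.
_vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
_features = tf.keras.Model(_vgg.input, _vgg.get_layer("block3_conv3").output)
_features.trainable = False


def gram_matrix(feat):
    """Channel-wise correlations of the feature maps, size-normalized."""
    b, h, w, c = tf.unstack(tf.shape(feat))
    flat = tf.reshape(feat, [b, h * w, c])
    gram = tf.matmul(flat, flat, transpose_a=True)   # [b, c, c]
    return gram / tf.cast(h * w * c, tf.float32)


def gram_loss(generated, reference):
    """L1 distance between Gram matrices of VGG-19 features.
    Assumes images are RGB in [0, 1]; preprocess_input expects [0, 255]."""
    x = tf.keras.applications.vgg19.preprocess_input(generated * 255.0)
    y = tf.keras.applications.vgg19.preprocess_input(reference * 255.0)
    return tf.reduce_mean(tf.abs(gram_matrix(_features(x)) -
                                 gram_matrix(_features(y))))
```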