Deep Multi-Task Learning for Diabetic Retinopathy Grading in Fundus Images
Authors: Xiaofei Wang, Mai Xu, Jicong Zhang, Lai Jiang, Liu Li
AAAI 2021 | Conference PDF | Archive PDF
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experimental results show that our Deep MT-DR method significantly outperforms other state-of-the-art methods for DR grading over two public datasets. In addition, our method achieves comparable performance in two auxiliary tasks of ISR and lesion segmentation." The Experiments section spans Implementation Details, Evaluation on DR Grading, Evaluation on Auxiliary Tasks, and Ablation Study. A sketch of a multi-task loss in this spirit follows the table. |
| Researcher Affiliation | Academia | Xiaofei Wang¹, Mai Xu¹*, Jicong Zhang²*, Lai Jiang¹, Liu Li³ (¹School of Electronic and Information Engineering, Beihang University, Beijing, China; ²School of Biological Science and Medical Engineering, Beihang University, Beijing, China; ³Department of Computing, Imperial College London, London, UK) |
| Pseudocode | Yes | Algorithm 1: Gradient-weighted feature combination in GMSV. A hedged sketch of one such scheme follows the table. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | "In our experiments, we evaluate the performance of our Deep MT-DR method on two public DR datasets, i.e., the DDR dataset (Li et al. 2019b) and EyePACS dataset (Graham 2015)." |
| Dataset Splits | Yes | "We use the default data split setting of these two datasets." and "Note that all hyper-parameters are tuned over the validation set." A sketch of loading predefined splits follows the table. |
| Hardware Specification | Yes | "All experiments are conducted on a computer with an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz, 32GB RAM and 4 Nvidia GeForce GTX 1080 Ti GPUs." |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific version numbers for programming languages, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | "The values of key hyper-parameters in the training stages are listed in Table 1." and "Besides, in both stages, the parameters are updated using the Adam (Kingma and Ba 2014) optimizer, together with the weight decay." A sketch of such an optimizer setup follows the table. |
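
To make the multi-task setup in the Research Type row concrete: below is a minimal PyTorch sketch of one common way to combine a main DR-grading loss with the two auxiliary tasks the paper names (ISR and lesion segmentation). The loss types and weights here are assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def multitask_loss(grade_logits, grade_labels,
                   sr_pred, sr_target,
                   seg_logits, seg_masks,
                   w_isr=0.5, w_seg=0.5):
    # Main task: DR grading as multi-class classification
    # (loss choice and class count are assumptions).
    l_grade = F.cross_entropy(grade_logits, grade_labels)
    # Auxiliary task 1: image super-resolution (ISR), L1 reconstruction loss.
    l_isr = F.l1_loss(sr_pred, sr_target)
    # Auxiliary task 2: lesion segmentation, per-pixel binary cross-entropy.
    l_seg = F.binary_cross_entropy_with_logits(seg_logits, seg_masks)
    # Weighted sum; the paper's actual weighting scheme may differ.
    return l_grade + w_isr * l_isr + w_seg * l_seg
```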
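
Regarding the Pseudocode row: the paper's Algorithm 1 describes a gradient-weighted feature combination inside its GMSV module. The sketch below shows a generic Grad-CAM-style channel weighting, which is our assumption of what "gradient-weighted" means here; the paper's actual algorithm may differ.

```python
import torch
import torch.nn.functional as F

def gradient_weighted_combination(features, task_score):
    """Weight feature channels by the gradient of a task score.

    features:   (B, C, H, W) feature maps participating in the graph.
    task_score: scalar output (e.g., a summed grading logit) to differentiate.
    """
    # Gradient of the task score with respect to the feature maps.
    grads = torch.autograd.grad(task_score, features, retain_graph=True)[0]
    # Global-average-pool the gradients into per-channel weights.
    weights = grads.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    # Channel-weighted sum, keeping positively contributing evidence only.
    return F.relu((weights * features).sum(dim=1, keepdim=True))
```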
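
For the Dataset Splits row: both DDR and EyePACS ship with predefined splits, and the paper uses the defaults. A minimal sketch of consuming such a layout is below; the directory and file names are hypothetical and will differ from the actual dataset packaging.

```python
from pathlib import Path

# Hypothetical layout: one folder per predefined split.
root = Path("DDR-dataset/DR_grading")
splits = {name: sorted((root / name).glob("*.jpg"))
          for name in ("train", "valid", "test")}
print({name: len(files) for name, files in splits.items()})
```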
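
For the Experiment Setup row: a minimal sketch of the stated optimizer choice (Adam with weight decay) in PyTorch. The learning rate and decay values are placeholders; the actual values are in the paper's Table 1, which this report does not reproduce.

```python
import torch

model = torch.nn.Conv2d(3, 64, 3)  # stand-in for the Deep MT-DR network
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,            # placeholder; see the paper's Table 1
    weight_decay=1e-5,  # "weight decay" as stated, via Adam's L2 term
)
```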