SuperJunction: Learning-Based Junction Detection for Retinal Image Registration

Authors: Yu Wang, Xiaoye Wang, Zaiwang Gu, Weide Liu, Wee Siong Ng, Weimin Huang, Jun Cheng

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the FIRE dataset show that the method achieves a mean area under curve of 0.850, which is 12.6% higher than the 0.755 achieved by the state-of-the-art method. (A sketch of this AUC computation follows the table.)
Researcher Affiliation | Academia | Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore; Department of Mathematics, Harbin Institute of Technology, Weihai, China
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | All code is available at https://github.com/samjcheng/SuperJunction.
Open Datasets | Yes | Four commonly used datasets containing vessel masks are used for training: DRIVE (Staal et al. 2004), STARE (Hoover, Kouznetsova, and Goldbaum 2000), HRF (Budai et al. 2013), and CHASE_DB1 (Fraz et al. 2012). The FIRE dataset (Hernandez-Matas et al. 2017) is used for registration performance evaluation.
Dataset Splits | No | The paper states that "121 randomly selected images are used for training while the rest images are used for evaluation of keypoint detection," indicating training and evaluation sets, but it does not mention a distinct validation split.
Hardware Specification | Yes | Training and testing were performed on an Ubuntu 18.04 system with two NVIDIA GeForce RTX 2080 Ti graphics cards.
Software Dependencies | No | The paper states the implementation uses the PyTorch platform but does not provide version numbers for PyTorch or other software dependencies.
Experiment Setup | Yes | Images are resized to 1024 × 1024 for training. The network is trained with batch size 4 using the Adam optimizer with an initial learning rate of 0.0001, which is reduced to 0.00001 after 50 epochs. The maximum number of training epochs is 150. (A hedged sketch of this configuration follows the table.)
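
For concreteness, here is a minimal PyTorch sketch of the reported training configuration. Only the hyperparameters (image size, batch size, Adam, the 0.0001 → 0.00001 learning-rate drop at epoch 50, 150 epochs) come from the paper; the model is a placeholder for the actual SuperJunction network in the linked repository, and the MultiStepLR scheduler is an assumption that matches the described drop.

from torch import nn, optim
from torch.optim.lr_scheduler import MultiStepLR

# Reported hyperparameters: 1024 x 1024 inputs, batch size 4, Adam,
# lr 1e-4 reduced to 1e-5 after 50 epochs, 150 epochs total.
IMAGE_SIZE = 1024
BATCH_SIZE = 4
MAX_EPOCHS = 150

# Placeholder network; the real SuperJunction model lives in the linked repo.
model = nn.Conv2d(3, 65, kernel_size=3, padding=1)

optimizer = optim.Adam(model.parameters(), lr=1e-4)
# MultiStepLR with gamma=0.1 at epoch 50 reproduces the 1e-4 -> 1e-5 drop;
# the exact scheduler used by the authors is an assumption here.
scheduler = MultiStepLR(optimizer, milestones=[50], gamma=0.1)

for epoch in range(MAX_EPOCHS):
    # for images, targets in train_loader:   # batches of 4 fundus images
    #     optimizer.zero_grad()
    #     loss = compute_loss(model(images), targets)   # hypothetical helper
    #     loss.backward()
    #     optimizer.step()
    scheduler.step()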
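
The mean AUC quoted in the Research Type row summarizes registration accuracy across error thresholds. The excerpt does not spell out the protocol, so the following is a minimal sketch assuming the common FIRE-style evaluation: a success-rate-versus-threshold curve whose normalized area is the AUC. The function name, the 25-pixel threshold cap, and the example error values are all illustrative assumptions.

import numpy as np

def registration_auc(errors_px, max_threshold_px=25):
    """Area under the success-rate-vs-threshold curve (assumed protocol).

    For each error threshold t, the success rate is the fraction of image
    pairs whose registration error is at most t pixels; the AUC is the
    mean success rate over thresholds 1..max_threshold_px.
    """
    errors = np.asarray(errors_px, dtype=float)
    thresholds = np.arange(1, max_threshold_px + 1)
    success_rates = [(errors <= t).mean() for t in thresholds]
    return float(np.mean(success_rates))

# Hypothetical per-pair registration errors in pixels.
print(registration_auc([1.2, 3.5, 7.0, 30.0]))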