H-Net: Neural Network for Cross-domain Image Patch Matching

Authors: Weiquan Liu, Xuelun Shen, Cheng Wang, Zhihong Zhang, Chenglu Wen, Jonathan Li

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that the proposed H-Net and H-Net++ outperform the existing algorithms.
Researcher Affiliation | Academia | (1) Fujian Key Laboratory of Sensing and Computing for Smart City, School of Information Science and Engineering, Xiamen University, Xiamen, China; (2) Software School, Xiamen University, Xiamen, China; (3) Department of Geography and Environmental Management, University of Waterloo, Waterloo, Canada
Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Our code and cross-domain image dataset are available at https://github.com/Xylon-Sean/H-Net.
Open Datasets | Yes | Our code and cross-domain image dataset are available at https://github.com/Xylon-Sean/H-Net.
Dataset Splits | No | We selected 160,000 pairs of cross-domain image patches as training data, in which 80,000 pairs of patches are matching and 80,000 pairs of patches are non-matching. For testing, we used the remaining 40,000 pairs of cross-domain image patches as the testing data, of which 20,000 pairs of patches are matching and 20,000 pairs of patches are non-matching. The paper does not explicitly mention a validation split.
Hardware Specification | Yes | All the experiments were performed on an NVIDIA Tesla P100.
Software Dependencies | No | The paper mentions TensorFlow but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | Our models were trained using the Adaptive Moment Estimation (Adam) optimizer. The learning rate starts at 0.001 and decays by 0.9 for each epoch.
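The Dataset Splits row describes an 80/20 train/test partition (160,000 training pairs, 40,000 test pairs) that stays balanced between matching and non-matching pairs, with no validation set. A minimal sketch of such a class-balanced split, using hypothetical pair lists (the paper does not describe its actual sampling code):

```python
import random

def balanced_split(matching, non_matching, train_frac=0.8, seed=0):
    """Split matching and non-matching pairs separately so that both
    the training and test sets keep a 50/50 class balance, as in the
    paper's 160,000 / 40,000 split. The seed and helper names here are
    illustrative assumptions, not the authors' implementation."""
    rng = random.Random(seed)

    def split(pairs):
        pairs = pairs[:]          # avoid mutating the caller's list
        rng.shuffle(pairs)
        cut = int(len(pairs) * train_frac)
        return pairs[:cut], pairs[cut:]

    m_train, m_test = split(matching)
    n_train, n_test = split(non_matching)
    return m_train + n_train, m_test + n_test
```

With 100,000 matching and 100,000 non-matching pairs, this yields 160,000 training pairs and 40,000 test pairs, matching the counts reported above.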
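The Experiment Setup row (Adam optimizer, initial learning rate 0.001, decayed by 0.9 each epoch) implies a standard exponential schedule. A small sketch of that schedule, assuming "decays 0.9" means multiplication by 0.9 per epoch (the paper does not spell out the decay formula or the total epoch count):

```python
def learning_rate(epoch, initial_lr=0.001, decay=0.9):
    """Exponentially decayed learning rate: initial_lr * decay**epoch.

    Mirrors the reported setup (start at 0.001, decay by 0.9 each
    epoch); in training, this value would be fed to the Adam optimizer
    at the start of every epoch.
    """
    return initial_lr * decay ** epoch

# First few epochs of the schedule:
schedule = [learning_rate(e) for e in range(4)]
# epoch 0: 0.001, epoch 1: 0.0009, epoch 2: 0.00081, ...
```

In TensorFlow (which the paper uses), the same schedule is typically expressed with an exponential-decay learning-rate schedule passed to the Adam optimizer rather than computed by hand.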