IA-GM: A Deep Bidirectional Learning Method for Graph Matching

Authors: Kaixuan Zhao, Shikui Tu, Lei Xu (pp. 3474-3482)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on challenging datasets demonstrate the effectiveness of our methods for both supervised learning and unsupervised learning. Experiment results including ablation studies demonstrate the effectiveness of our presented components, i.e., the SR-GGNN, learning the similarity between the graphs, and learning the fusion of extracted similarity with the GM solution feedback. Results on benchmark datasets indicate that our method outperforms peer methods in both supervised and unsupervised learning.
Researcher Affiliation | Academia | Kaixuan Zhao, Shikui Tu, Lei Xu; Department of Computer Science and Engineering, Shanghai Jiao Tong University; Centre for Cognitive Machines and Computational Health (CMaCH), Shanghai Jiao Tong University; {zhaokx3, tushikui, leixu}@sjtu.edu.cn
Pseudocode | No | The paper describes algorithms and methods in prose but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We use the PASCAL VOC dataset with keypoint annotations (Bourdev and Malik 2009) for the case of supervised learning. We perform our method on the CMU House/Hotel datasets (House: 111 images and Hotel: 101 images) for comparisons with existing unsupervised learning methods.
Dataset Splits | Yes | The dataset contains 20 semantic classes with 3510 pairs of annotated examples for training and 841 pairs for testing. This dataset consists of 30 pairs of car images and 20 pairs of motorbike images (60 percent for training and 40 percent for testing); the number of inliers for each pair ranges from 15 to 52.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using "Pretrained VGG16" and concepts like "Sinkhorn normalization" and "Hungarian algorithm", but does not provide specific software names with version numbers for its dependencies. (A minimal sketch of the latter two operations follows the table.)
Experiment Setup | Yes | In practice, we find that the layer number L of the modified GRU computation by Eq.(4) is best set at L = 1... For Eq.(10), we initialize W as 1_{n×n} + ε, with 1_{n×n} being an n×n all-one matrix, and set α = 40 to increase the difference between probability values. We set α₁ = 0.75 and α₂ = 1.25 in Eq.(9)... We set the number of IA-iterations to 4 for the VOC datasets. In Eq.(13), the temperature parameter τ should technically be as close to zero as possible, but in practice, too low a temperature usually leads to high variance in gradients. Thus, we use τ = 0.05 as the default hyperparameter. (See the hyperparameter sketch below.)
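
The dependencies row above mentions Sinkhorn normalization and the Hungarian algorithm without naming libraries or versions. For reference, here is a minimal NumPy/SciPy sketch of what these two steps typically look like in graph-matching pipelines; the function names, the iteration count, and the exp-based positivity step are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(scores, n_iters=10, eps=1e-8):
    """Sinkhorn normalization: alternately normalize rows and columns so the
    score matrix approaches a doubly-stochastic matrix.
    (Illustrative sketch; the iteration count is an assumption.)"""
    s = np.exp(scores)  # ensure strictly positive entries before normalizing
    for _ in range(n_iters):
        s = s / (s.sum(axis=1, keepdims=True) + eps)  # row normalization
        s = s / (s.sum(axis=0, keepdims=True) + eps)  # column normalization
    return s

def hungarian(s):
    """Discretize a soft assignment with the Hungarian algorithm
    (maximize total score by minimizing the negated cost)."""
    rows, cols = linear_sum_assignment(-s)
    perm = np.zeros_like(s)
    perm[rows, cols] = 1.0
    return perm
```

A typical usage would be `perm = hungarian(sinkhorn(raw_scores))`, where `raw_scores` is an n×n keypoint affinity matrix; SciPy's `linear_sum_assignment` provides the Hungarian-style assignment step.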
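
The setup row quotes concrete hyperparameter values (L = 1, α = 40, α₁ = 0.75, α₂ = 1.25, 4 IA-iterations, τ = 0.05). The sketch below only shows how such settings might be collected and how a temperature-scaled softmax behaves as τ approaches zero; all variable names, the magnitude of ε, and the softmax form are assumptions, since Eq.(9), Eq.(10), and Eq.(13) are not reproduced in this report.

```python
import numpy as np

# Hyperparameters quoted in the paper's setup (values from the text above;
# the variable names and the magnitude of eps are assumptions).
GRU_LAYERS = 1                 # L in Eq.(4)
ALPHA = 40.0                   # sharpening factor in Eq.(10)
ALPHA1, ALPHA2 = 0.75, 1.25    # weights in Eq.(9)
NUM_IA_ITERS = 4               # IA-iterations used on the VOC datasets
TAU = 0.05                     # default temperature in Eq.(13)

n = 10                         # illustrative problem size
eps = 1e-3                     # small positive constant; value not given in the paper
W = np.ones((n, n)) + eps      # W initialized as 1_{n×n} + eps, per the setup

def temperature_softmax(logits, tau=TAU):
    """Row-wise softmax with temperature tau. As tau -> 0 each row approaches
    a one-hot assignment, but gradients become high-variance, which motivates
    the paper's default tau = 0.05. (Sketch only; not the paper's exact Eq.(13).)"""
    z = np.exp((logits - logits.max(axis=1, keepdims=True)) / tau)
    return z / z.sum(axis=1, keepdims=True)
```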