Contrastive General Graph Matching with Adaptive Augmentation Sampling

Authors: Jianyuan Bo, Yuan Fang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we empirically evaluate the proposed model GCGM and BiAS. ... We tested three real-world datasets. ... We assessed the performance of GCGM against diverse baselines, including supervised, learning-free, and unsupervised methods."
Researcher Affiliation | Academia | "Jianyuan Bo, Yuan Fang, Singapore Management University, Singapore, {jybo.2020, yfang}@smu.edu.sg"
Pseudocode | Yes | "Pseudocode of our method can be found in Appendix A."
Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology or a link to a code repository.
Open Datasets | Yes | "We tested three real-world datasets. (1) Pascal VOC [Bourdev and Malik, 2009; Everingham et al., 2010] includes images from 20 classes; (2) Willow [Cho et al., 2013] offers 256 images over five classes; (3) SPair-71k [Min et al., 2019] has 70,958 image pairs across 18 classes. Besides, we followed a recent work [Liu et al., 2023] to generate a synthetic dataset from random 2D node coordinates for the general non-visual domain. All graphs are constructed based on Delaunay triangulation. More dataset details are presented in Appendix C."
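The graph-construction step quoted above (Delaunay triangulation over node coordinates) can be sketched as follows. This is a minimal illustration using SciPy, not code from the paper; `delaunay_edges` is a hypothetical helper name.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(coords):
    """Build an undirected edge list from 2D node coordinates
    via Delaunay triangulation."""
    tri = Delaunay(coords)
    edges = set()
    for simplex in tri.simplices:
        # each triangle contributes its three edges
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            edges.add((min(a, b), max(a, b)))
    return sorted(edges)

# Four nodes: a triangle with one interior point -> 3 triangles, 6 edges.
coords = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0], [1.0, 0.5]])
edges = delaunay_edges(coords)
```

In the paper's pipeline the node coordinates would come from annotated keypoints (for the visual datasets) or random 2D points (for the synthetic dataset).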
Dataset Splits | Yes | "Our reported results for supervised methods and SCGM might be slightly lower than their original papers due to our 80:20 train-validation split from the original training set, as the original splits lack a validation set. We repeated the splits five times using varied random seeds."
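The split protocol quoted above (80:20 train-validation split of the original training set, repeated five times with varied seeds) might look like this minimal sketch; `split_train_val` and the specific seed values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def split_train_val(n_samples, val_ratio=0.2, seed=0):
    """Shuffle sample indices and carve off a validation subset
    (80:20 by default, matching the paper's protocol)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(round(n_samples * val_ratio))
    # first n_val shuffled indices become validation, the rest training
    return idx[n_val:].tolist(), idx[:n_val].tolist()

# Five repeats with varied random seeds, as described in the quote.
splits = [split_train_val(100, seed=s) for s in range(5)]
```

Each repeat reshuffles before splitting, so the five validation sets differ while every split keeps the 80:20 ratio.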
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software such as 'ThinkMatch' and 'Optuna' but does not specify their version numbers or other software dependencies with versions.
Experiment Setup | Yes | "For BiAS, we set λ = 0.8, α = 3, |P| = 512. However, for the Willow dataset, due to its smaller size, we adjust |P| to 128. Early stopping was applied if performance improvements were below the threshold ϵ = 0.001. Detailed model and parameter configurations can be found in Appendix A."
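The early-stopping rule quoted above (stop once performance improvement falls below ϵ = 0.001) can be sketched as below. Only the threshold value comes from the paper; the single-step comparison and the function name are assumptions, since the exact criterion is in the paper's appendix.

```python
def train_with_early_stopping(val_scores, eps=1e-3):
    """Scan per-epoch validation scores and return the epoch at which
    training stops: the first epoch whose improvement over the best
    score so far is below eps (0.001, as in the paper)."""
    best = float("-inf")
    for epoch, score in enumerate(val_scores):
        if score - best < eps:
            return epoch  # improvement below threshold: stop here
        best = score
    return len(val_scores) - 1  # never triggered: ran to the end
```

For example, with validation scores [0.5, 0.6, 0.6005, 0.7], the improvement at the third epoch is 0.0005 < 0.001, so training halts there rather than continuing.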