Self-Supervised Bidirectional Learning for Graph Matching

Authors: Wenqi Guo, Lin Zhang, Shikui Tu, Lei Xu

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments deliver superior performance over the previous state-of-the-arts on five real-world benchmarks, especially under the more difficult outlier scenarios, demonstrating the effectiveness of our method. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering, Shanghai Jiao Tong University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/CMACH508/IA-SSGM. |
| Open Datasets | Yes | We verify our method on datasets including PASCALVOC with Berkeley annotation (Everingham et al. 2010; Bourdev and Malik 2009), WILLOW-OBJECTCLASS (Cho, Alahari, and Ponce 2013), CMU (Belongie, Malik, and Puzicha 2002), CUB2011 (Wah et al. 2011), and IMC-PTSPARSEGM (Jin et al. 2021; Wang et al. 2021). |
| Dataset Splits | Yes | In line with (Wang et al. 2021), we use Reichstag, Sacre Coeur, and St. Peters Square as the testing set and the remaining 13 tourism attractions as the training set. |
| Hardware Specification | Yes | Experiments run on an Intel(R) Xeon(R) Gold 6226R CPU (2.90GHz) and one Nvidia A100 (40G) GPU. |
| Software Dependencies | No | Our method is implemented in PYTORCH, using the PYTORCH GEOMETRIC (Fey and Lenssen 2019) and PYGCL (Zhu et al. 2021) libraries. The specific version numbers for PyTorch, PyTorch Geometric, and PyGCL are not provided. |
| Experiment Setup | Yes | For all experiments, optimization is done via ADAM (Da 2014) with a decaying learning rate. ... We set α = β = tanh(m/5), γ = 1 − tanh(m/5), where m denotes the training epoch. When the loss change is smaller than 0.001 over 3 epochs, meaning the predictive result of our model has converged for each pair of graphs, we stop the training. ... We adopt SPLINECNN (Fey et al. 2018) as our GNN encoder with a trainable B-spline kernel function... The GNN encoder is implemented by stacking two layers of the GIN operator (Xu et al. 2018). ... We stack one SPLINECNN layer (Fey et al. 2018) and one GIN layer (Xu et al. 2018) as the GNN encoder. |
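The loss-weight schedule and stopping criterion quoted above can be sketched in a few lines. This is a minimal Python interpretation, not the authors' code: the function names `loss_weights` and `should_stop` are hypothetical, and the stopping rule assumes "loss change smaller than 0.001 over 3 epochs" means every consecutive change in the last three epochs is below the tolerance.

```python
import math

def loss_weights(epoch: int):
    """Schedule from the paper: alpha = beta = tanh(m/5),
    gamma = 1 - tanh(m/5), where m is the training epoch."""
    w = math.tanh(epoch / 5)
    return w, w, 1.0 - w  # (alpha, beta, gamma)

def should_stop(loss_history, tol=1e-3, patience=3):
    """Stop when each of the last `patience` epoch-to-epoch loss
    changes is smaller than `tol` (one reading of the criterion)."""
    if len(loss_history) < patience + 1:
        return False
    recent = loss_history[-(patience + 1):]
    return all(abs(recent[i + 1] - recent[i]) < tol
               for i in range(patience))
```

Note that alpha and gamma always sum to 1 under this schedule, so the weighting shifts smoothly from the gamma-weighted term toward the alpha/beta-weighted terms as training proceeds.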