Proxy Graph Matching with Proximal Matching Networks

Authors: Hao-Ru Tan, Chuang Wang, Si-Tong Wu, Tie-Qiang Wang, Xu-Yao Zhang, Cheng-Lin Liu

AAAI 2021, pp. 9808-9815

Each entry below pairs a reproducibility variable and its result with the supporting LLM response.

Research Type | Experimental
To justify our approach, we provide a convergence guarantee for the proximal method for graph matching. The overall performance is validated by numerical experiments. In particular, our approach is trained on synthetic random graphs and then applied to several real-world datasets. The experimental results demonstrate that our method is robust to rotational transforms and achieves strong matching accuracy.

Researcher Affiliation | Academia
(1) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; (2) School of Artificial Intelligence, University of Chinese Academy of Sciences; (3) CAS Center for Excellence in Brain Science and Intelligence Technology.

Pseudocode | Yes
Algorithm 1: Differentiable Proximal Graph Matching (DPGM)
Input: node affinity vector $u$, edge affinity matrix $P$, a sequence of step sizes $\{\beta_0, \beta_1, \beta_2, \dots\}$, and a maximum iteration count $T$.
Initialization: $z_0 = \mathrm{Sinkhorn}(u)$, $t = 0$.
for $t = 0$ to $T-1$ do
    $\tilde{z}_{t+1} = \exp\left[\frac{\beta_t}{1+\beta_t}(u + P z_t) + \frac{1}{1+\beta_t}\log(z_t)\right]$
    $z_{t+1} = \mathrm{Sinkhorn}(\tilde{z}_{t+1})$
end for
Output: $z_T$

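For concreteness, the following PyTorch sketch implements the iteration above. The log-domain Sinkhorn normalization, its fixed inner-iteration count, the flattened-vector shapes, and the helper names `sinkhorn` and `dpgm` are our own illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of Algorithm 1 (DPGM), under the assumptions stated above.
import torch

def sinkhorn(log_z, n_rows, n_cols, num_iters=10):
    """Alternately normalize rows and columns of exp(log_z), viewed as an
    (n_rows x n_cols) soft-assignment matrix, in log space for stability."""
    log_z = log_z.view(n_rows, n_cols)
    for _ in range(num_iters):
        log_z = log_z - torch.logsumexp(log_z, dim=1, keepdim=True)  # rows
        log_z = log_z - torch.logsumexp(log_z, dim=0, keepdim=True)  # columns
    return log_z.reshape(-1)

def dpgm(u, P, betas, n_rows, n_cols):
    """Run the proximal graph-matching iteration for len(betas) steps.

    u: node affinity vector, shape (n_rows * n_cols,)
    P: edge affinity matrix, shape (n_rows * n_cols, n_rows * n_cols)
    betas: step sizes beta_0, ..., beta_{T-1}
    """
    log_z = sinkhorn(u, n_rows, n_cols)  # z_0 = Sinkhorn(u)
    for beta in betas:
        z = log_z.exp()
        # z~_{t+1} = exp[beta/(1+beta) * (u + P z_t) + 1/(1+beta) * log z_t]
        log_z_tilde = (beta / (1 + beta)) * (u + P @ z) + (1 / (1 + beta)) * log_z
        log_z = sinkhorn(log_z_tilde, n_rows, n_cols)  # z_{t+1} = Sinkhorn(z~_{t+1})
    return log_z.exp().view(n_rows, n_cols)  # soft assignment z_T
```

Because every step is composed of differentiable operations (exponentiation, a matrix-vector product, and log-space normalization), gradients can flow through all $T$ iterations to upstream affinity parameters, which is what makes the scheme trainable end to end.
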
Open Source Code | No
The paper does not provide an explicit statement about releasing code or a link to a code repository.

Open Datasets | Yes
Willow-Object dataset (Cho, Alahari, and Ponce 2013); PASCAL-PF dataset (Cho, Alahari, and Ponce 2013); CMU House Sequence dataset.

Dataset Splits | No
The paper gives data-generation details for the synthetic training graphs, but it does not provide specific split percentages or counts for the real-world datasets used (Willow-Object, PASCAL-PF, CMU House Sequence), nor does it reference standard splits for these datasets.

Hardware Specification | Yes
The computing platform is an Intel Xeon E5-2650 v4 CPU with 256 GB RAM and eight Titan X 12 GB GPUs; only a single GPU is used when running the experiments.

Software Dependencies | No
All methods are implemented in the deep learning package PyTorch (Paszke, Gross, et al. 2017), but no version number is specified for the software.

Experiment Setup | Yes
The maximum iteration count of DPGM is kept at 5 for both the training and testing stages. Following (Zheng et al. 2015), the proximal operator coefficient $\beta$ is treated as an additional learnable parameter besides the parameters $\theta_{\mathrm{GNN}}$ of the first local GNN module. All methods are implemented in the deep learning package PyTorch (Paszke, Gross, et al. 2017), and the default temperature $\gamma = 1.0$ is used in the training stage of all experiments.

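A hedged sketch of how this setup could be wired together in PyTorch, reusing the `dpgm()` helper from the sketch above: the class name `DPGMMatcher`, the softplus positivity trick, and the forward signature are illustrative assumptions; only "$\beta$ is learnable and $T = 5$" comes from the paper's description.

```python
# Sketch: the proximal coefficient beta registered as a learnable parameter
# next to the GNN weights theta_GNN, as the Experiment Setup entry describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DPGMMatcher(nn.Module):
    def __init__(self, gnn: nn.Module, num_iters: int = 5):
        super().__init__()
        self.gnn = gnn                  # first local GNN module (theta_GNN)
        self.num_iters = num_iters      # kept at 5 for training and testing
        # Scalar raw parameter; softplus keeps the effective beta positive.
        # (A per-iteration vector of step sizes would work the same way.)
        self.raw_beta = nn.Parameter(torch.zeros(()))

    def beta(self):
        return F.softplus(self.raw_beta)

    def forward(self, u, P, n_rows, n_cols):
        # In the full pipeline, u and P would be produced by self.gnn from
        # node and edge features; here they are taken as given and fed to
        # the dpgm() sketch above with the shared learned step size.
        betas = [self.beta()] * self.num_iters
        return dpgm(u, P, betas, n_rows, n_cols)
```

An optimizer such as torch.optim.Adam(model.parameters()) then updates raw_beta jointly with the GNN weights; the temperature $\gamma = 1.0$ mentioned above would enter wherever the affinities are scaled before normalization.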