Adaptive Edge Attention for Graph Matching with Outliers

Authors: Jingwei Qu, Haibin Ling, Chenrui Zhang, Xiaoqing Lyu, Zhi Tang

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that EAGM achieves promising matching quality compared with state-of-the-arts, on cases both with and without outliers. Our source code along with the experiments is available at https://github.com/bestwei/EAGM."
Researcher Affiliation | Academia | "1 Wangxuan Institute of Computer Technology, Peking University, Beijing, China; 2 Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA"
Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | "Our source code along with the experiments is available at https://github.com/bestwei/EAGM."
Open Datasets | Yes | "The experiments are performed on two cases with and without outliers, including three benchmarks for keypoint matching: Pascal VOC [Everingham et al., 2010] with Berkeley annotations [Bourdev and Malik, 2009], Willow Object [Cho et al., 2013], and CMU House Sequence [Caetano et al., 2006]."
Dataset Splits | No | The paper specifies training and testing sets but does not explicitly mention a separate validation set or its split details.
Hardware Specification | Yes | "All experiments are run on a single GTX-1080Ti GPU, and around 25 image pairs are processed per second."
Software Dependencies | No | The paper mentions using the ADAM optimizer, the VGG16 network, and ImageNet, but does not provide version numbers for any software libraries or dependencies (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | "For all experiments, optimization is achieved via ADAM optimizer [Kingma and Ba, 2015] with initial learning rate 1 × 10⁻³, and exponential decaying 2% per 2000 iterations. ... We empirically set the number of convolutional layers l_1 = 3 and l_2 = 10 in the edge attention module and classification module respectively. The weights in Eq. 12 are set as λ_e = λ_c = 0.1 during training."
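
The reported training configuration maps directly onto a standard PyTorch setup. The sketch below is a minimal, hypothetical illustration: the model and the three loss terms are placeholders, and only the hyperparameters quoted above (ADAM, initial learning rate 1 × 10⁻³, 2% decay every 2000 iterations, and λ_e = λ_c = 0.1) come from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the EAGM network; only the hyperparameters
# below are taken from the paper's experiment-setup description.
model = nn.Linear(16, 2)

# ADAM optimizer with initial learning rate 1e-3, as reported.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Exponential decaying 2% per 2000 iterations": multiply the learning
# rate by 0.98 every 2000 iterations (scheduler stepped once per iteration).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.98)

# Loss weights from Eq. 12 of the paper.
lambda_e = lambda_c = 0.1

for iteration in range(6000):  # iteration count is illustrative
    # Placeholder losses; in EAGM these would be the matching loss plus
    # the edge-attention and classification terms weighted by lambda_e, lambda_c.
    out = model(torch.randn(8, 16))
    loss_match = out.pow(2).mean()
    loss_e = out.abs().mean()   # placeholder for the edge-attention loss
    loss_c = out.var()          # placeholder for the classification loss
    loss = loss_match + lambda_e * loss_e + lambda_c * loss_c

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

Stepping the scheduler once per training iteration makes StepLR's step_size count iterations rather than epochs, which matches the per-2000-iteration decay described in the paper.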