Deep Graph Matching Consensus
Authors: Matthias Fey, Jan E. Lenssen, Christopher Morris, Jonathan Masci, Nils M. Kriege
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify our method on three different tasks. We first show the benefits of our approach in an ablation study on synthetic graphs (Section 4.1), and apply it to the real-world tasks of supervised keypoint matching in natural images (Sections 4.2 and 4.3) and semi-supervised cross-lingual knowledge graph alignment (Section 4.4) afterwards. All dataset statistics can be found in Appendix H. Our method is implemented in PYTORCH (Paszke et al., 2017) using the PYTORCH GEOMETRIC (Fey & Lenssen, 2019) and the KEOPS (Charlier et al., 2019) libraries. |
| Researcher Affiliation | Collaboration | Matthias Fey¹, Jan E. Lenssen¹, Christopher Morris¹, Jonathan Masci², Nils M. Kriege¹; ¹TU Dortmund University, Dortmund, Germany; ²NNAISENSE, Lugano, Switzerland |
| Pseudocode | Yes | Algorithm 1 Optimized graph matching consensus algorithm |
| Open Source Code | Yes | Our source code is available under https://github.com/rusty1s/deep-graph-matching-consensus. |
| Open Datasets | Yes | We perform experiments on the PASCALVOC (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009) and WILLOW-OBJECTCLASS (Cho et al., 2013) datasets which contain sets of image categories with labeled keypoint locations. ... We evaluate our model on the DBP15K datasets (Sun et al., 2017) which link entities of the Chinese, Japanese and French knowledge graphs of DBPEDIA into the English version and vice versa. |
| Dataset Splits | Yes | For PASCALVOC, we follow the experimental setups of Zanfir & Sminchisescu (2018) and Wang et al. (2019b) and use the training and test splits provided by Choy et al. (2016). We pre-filter the dataset to exclude difficult, occluded and truncated objects, and require examples to have at least one keypoint, resulting in 6,953 and 1,671 annotated images for training and testing, respectively. (A filtering sketch is given after the table.) |
| Hardware Specification | No | Our implementation can process sparse mini-batches with parallel GPU acceleration and minimal memory footprint in all algorithm steps. No specific hardware details such as GPU or CPU models were provided. |
| Software Dependencies | No | Our method is implemented in PYTORCH (Paszke et al., 2017) using the PYTORCH GEOMETRIC (Fey & Lenssen, 2019) and the KEOPS (Charlier et al., 2019) libraries. No specific version numbers for these software components were provided. (A version-recording sketch is given after the table.) |
| Experiment Setup | Yes | For all experiments, optimization is done via ADAM (Kingma & Ba, 2015) with a fixed learning rate of 10⁻³. ... The number of layers and hidden dimensionality of all MLPs is set to 2 and 32, respectively, and we apply ReLU activation (Glorot et al., 2011) and Batch normalization (Ioffe & Szegedy, 2015) after each of its layers. ... We train and test our procedure with L^(train) = 10 and L^(test) = 20 refinement iterations, respectively. ... Our network architecture consists of two convolutional layers (T = 2), followed by dropout with probability 0.5, and a final linear layer. ... We use a three-layer GNN (T = 3) both for obtaining initial similarities and for refining alignments with dimensionality 256 and 32, respectively. ... For efficiency reasons, we train L^(initial) and L^(refined) sequentially for 100 epochs each. (A short hyperparameter sketch follows the table.) |
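
The Dataset Splits row quotes the pre-filtering criteria for PASCALVOC: difficult, occluded, and truncated objects are excluded, and each example must carry at least one labeled keypoint. The sketch below is a minimal illustration of that rule only; the annotation field names (`difficult`, `occluded`, `truncated`, `keypoints`) and the helper `filter_annotations` are assumptions, not the authors' actual pipeline, which builds on the splits of Choy et al. (2016).

```python
# Hypothetical pre-filtering of PASCAL VOC keypoint annotations, mirroring the
# criteria quoted in the Dataset Splits row. Field names are assumed for
# illustration and may differ from the Berkeley annotation format.
def filter_annotations(annotations):
    """Keep only objects that are not difficult, occluded, or truncated
    and that carry at least one labeled keypoint."""
    kept = []
    for obj in annotations:
        if obj.get("difficult") or obj.get("occluded") or obj.get("truncated"):
            continue  # drop hard examples, as described in the paper
        if len(obj.get("keypoints", [])) == 0:
            continue  # require at least one annotated keypoint
        kept.append(obj)
    return kept
```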
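Since the Software Dependencies row notes that no version numbers are reported, a reproducer has to pin them independently. A minimal sketch for recording the installed versions of the three named libraries, assuming the usual package names `torch`, `torch_geometric`, and `pykeops`:

```python
# Record the versions of the libraries named in the paper; the paper itself
# does not report versions, so any pinning is the reproducer's choice.
import torch
import torch_geometric
import pykeops

print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("pykeops:", pykeops.__version__)
```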
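The Experiment Setup row reports 2-layer MLPs with hidden dimensionality 32, ReLU and batch normalization after each layer, and ADAM with a fixed learning rate of 10⁻³. The following PyTorch sketch only instantiates those reported hyperparameters; the input size (64) and the helper name `make_mlp` are assumptions, and this is not the authors' released implementation.

```python
import torch
from torch.nn import BatchNorm1d, Linear, ReLU, Sequential

# Sketch of one of the 2-layer MLPs described in the Experiment Setup row:
# hidden dimensionality 32, with ReLU and batch normalization after each layer.
def make_mlp(in_channels, hidden_channels=32, num_layers=2):
    layers = []
    for i in range(num_layers):
        layers += [
            Linear(in_channels if i == 0 else hidden_channels, hidden_channels),
            ReLU(),
            BatchNorm1d(hidden_channels),
        ]
    return Sequential(*layers)

mlp = make_mlp(in_channels=64)  # 64 is an assumed input size, not from the paper
optimizer = torch.optim.Adam(mlp.parameters(), lr=1e-3)  # fixed learning rate 10^-3
```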