SeedGNN: Graph Neural Network for Supervised Seeded Graph Matching
Authors: Liren Yu, Jiaming Xu, Xiaojun Lin
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate SeedGNN on synthetic and real-world graphs and demonstrate significant performance improvements over both non-learning and learning algorithms in the existing literature. Furthermore, our experiments confirm that the knowledge learned by SeedGNN from training graphs can be generalized to test graphs of different sizes and categories. |
| Researcher Affiliation | Academia | ¹Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, USA; ²The Fuqua School of Business, Duke University, Durham, North Carolina, USA. |
| Pseudocode | No | The paper describes its architecture and procedures in text and figures but does not include any formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/Leron33/SeedGNN. |
| Open Datasets | Yes | We use the correlated Erdős-Rényi graph model (Pedarsani & Grossglauser, 2011), Facebook networks in (Traud et al., 2012), the SHREC'16 computer vision dataset in (Lähner et al., 2016), and the Willow Object dataset (Cho et al., 2013) in our experiments. |
| Dataset Splits | No | The paper specifies a 'training set' and 'test set' but does not explicitly mention a 'validation set' or provide details about a validation split. |
| Hardware Specification | Yes | Our model is implemented using PyTorch (Paszke et al., 2019) and trained on an Intel Core i7-8750H CPU. |
| Software Dependencies | No | The paper states 'Our model is implemented using PyTorch (Paszke et al., 2019)' but does not provide a specific version number for PyTorch or other software dependencies. |
| Experiment Setup | Yes | In our experiment, the number of SeedGNN layers is fixed to 6. We implement the operators ϕ_l and ρ_l as two-layer neural networks with d_l = 16. For all experiments, optimization is done via ADAM (Kingma & Ba, 2015) with a fixed learning rate of 10^-2. The training batch size is 64. The overall training for 500 epochs takes about 12 hours and requires 2.68 GB memory. |
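
The Experiment Setup row above lists the reported hyperparameters (6 SeedGNN layers, two-layer networks for ϕ_l and ρ_l with d_l = 16, ADAM at learning rate 10^-2, batch size 64, 500 epochs). The following is a minimal PyTorch sketch of that configuration only; the layer internals (`SeedGNN.forward`, the exact roles of ϕ_l and ρ_l, and the feature dimensions) are assumptions for illustration and are defined by the paper itself, not by this snippet.

```python
import torch
import torch.nn as nn

class TwoLayerMLP(nn.Module):
    """Two-layer network standing in for the phi_l / rho_l operators (d_l = 16)."""
    def __init__(self, in_dim: int, hidden_dim: int = 16, out_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class SeedGNNSketch(nn.Module):
    """Assumed skeleton: 6 layers, each with its own phi/rho two-layer networks.

    The actual message passing and seed-based matching updates come from the
    paper and its released code (https://github.com/Leron33/SeedGNN); they are
    intentionally omitted here.
    """
    def __init__(self, num_layers: int = 6, dim: int = 16):
        super().__init__()
        self.phis = nn.ModuleList([TwoLayerMLP(dim, dim, dim) for _ in range(num_layers)])
        self.rhos = nn.ModuleList([TwoLayerMLP(dim, dim, dim) for _ in range(num_layers)])

# Reported optimization settings: ADAM with a fixed learning rate of 1e-2,
# batch size 64, 500 training epochs.
model = SeedGNNSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
batch_size = 64
num_epochs = 500
```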