How Powerful are Graph Neural Networks?
Authors: Keyulu Xu*, Weihua Hu*, Jure Leskovec, Stefanie Jegelka
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | Keyulu Xu (MIT, keyulu@mit.edu); Weihua Hu (Stanford University, weihuahu@stanford.edu); Jure Leskovec (Stanford University, jure@cs.stanford.edu); Stefanie Jegelka (MIT, stefje@mit.edu) |
| Pseudocode | No | The paper uses mathematical equations to describe the model updates (e.g., Eq. 2.1 and 4.1) but does not include explicit pseudocode blocks or algorithms; a minimal sketch of the GIN update is given after this table. |
| Open Source Code | Yes | The code is available at https://github.com/weihua916/powerful-gnns. |
| Open Datasets | Yes | We use 9 graph classification benchmarks: 4 bioinformatics datasets (MUTAG, PTC, NCI1, PROTEINS) and 5 social network datasets (COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY and REDDIT-MULTI5K) (Yanardag & Vishwanathan, 2015). |
| Dataset Splits | Yes | Following (Yanardag & Vishwanathan, 2015; Niepert et al., 2016), we perform 10-fold cross-validation with LIB-SVM (Chang & Lin, 2011). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions software components like 'LIB-SVM', 'Adam optimizer', and 'Batch normalization' but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | For all configurations, 5 GNN layers (including the input layer) are applied, and all MLPs have 2 layers. Batch normalization (Ioffe & Szegedy, 2015) is applied on every hidden layer. We use the Adam optimizer (Kingma & Ba, 2015) with initial learning rate 0.01 and decay the learning rate by 0.5 every 50 epochs. The hyper-parameters we tune for each dataset are: (1) the number of hidden units ∈ {16, 32} for bioinformatics graphs and 64 for social graphs; (2) the batch size ∈ {32, 128}; (3) the dropout ratio ∈ {0, 0.5} after the dense layer (Srivastava et al., 2014); (4) the number of epochs, i.e., a single epoch with the best cross-validation accuracy averaged over the 10 folds was selected. |
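
As the "Pseudocode" row notes, the model is specified only through equations. The snippet below is a minimal sketch of the GIN node update from Eq. 4.1, h_v^(k) = MLP^(k)((1 + ε^(k)) · h_v^(k-1) + Σ_{u∈N(v)} h_u^(k-1)), with a 2-layer MLP and batch normalization as in the quoted setup; the class name, dense-adjacency interface, and variable names are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN update: h_v = MLP((1 + eps) * h_v + sum of neighbor features)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))   # learnable epsilon (GIN-eps variant)
        self.mlp = nn.Sequential(                 # 2-layer MLP, as in the quoted setup
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )
        self.bn = nn.BatchNorm1d(out_dim)         # batch norm on every hidden layer

    def forward(self, h, adj):
        # h: (num_nodes, in_dim) node features; adj: (num_nodes, num_nodes) 0/1 adjacency
        neighbor_sum = adj @ h                    # sum aggregation over neighbors
        return torch.relu(self.bn(self.mlp((1 + self.eps) * h + neighbor_sum)))

# Toy usage: 4 nodes on a cycle, 7-dimensional input features.
h = torch.randn(4, 7)
adj = torch.tensor([[0., 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]])
layer = GINLayer(7, 16)
print(layer(h, adj).shape)  # torch.Size([4, 16])
```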
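
The "Dataset Splits" row quotes a 10-fold cross-validation protocol. Below is a minimal sketch of producing such splits with scikit-learn; the label array is a random placeholder, and whether the authors stratify or shuffle in exactly this way is not stated in the quoted text.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

graph_labels = np.random.randint(0, 2, size=188)   # placeholder per-graph labels (MUTAG has 188 graphs)
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(skf.split(np.zeros(len(graph_labels)), graph_labels)):
    # Train on graphs[train_idx], evaluate on graphs[test_idx];
    # reported accuracy is then averaged over the 10 folds.
    print(f"fold {fold}: {len(train_idx)} train graphs, {len(test_idx)} test graphs")
```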
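
The "Experiment Setup" row fixes the optimizer and schedule. The skeleton below wires those settings together (Adam with initial learning rate 0.01, halved every 50 epochs, batch size drawn from {32, 128}); the model and data are stand-ins rather than the paper's 5-layer GIN, and the number of training epochs is itself a tuned hyper-parameter in the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data: 64-dimensional graph-level features, binary labels.
model = nn.Sequential(nn.Linear(64, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 2))
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 64), torch.randint(0, 2, (256,))),
    batch_size=32,          # tuned over {32, 128} in the paper
    shuffle=True,
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)                         # initial LR 0.01
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)   # halve LR every 50 epochs
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):    # epoch count is selected via the 10-fold CV accuracy
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    scheduler.step()
```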