Graph Neural Networks Exponentially Lose Expressive Power for Node Classification

Authors: Kenta Oono, Taiji Suzuki

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally confirm that the proposed weight scaling enhances the predictive performance of GCNs in real data.
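The weight scaling referred to here concerns the singular values of the GCN weight matrices. A minimal, hypothetical sketch of one way such a rescaling could be implemented, assuming it amounts to rescaling each layer's weight matrix so that its largest singular value matches a chosen target; the function name and target value are illustrative assumptions, not the authors' exact procedure:

    import numpy as np

    def scale_weight(W, target_sigma=1.5):
        """Rescale W so its largest singular value equals target_sigma.

        Hypothetical illustration of singular-value-based weight scaling;
        the paper's actual scheme may differ.
        """
        sigma_max = np.linalg.norm(W, ord=2)  # largest singular value of W
        if sigma_max == 0:
            return W
        return W * (target_sigma / sigma_max)

    # Example: rescale a randomly initialized GCN layer weight.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 64))
    W_scaled = scale_weight(W)
    print(np.linalg.norm(W_scaled, ord=2))  # close to 1.5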
Researcher Affiliation | Collaboration | Kenta Oono (1,2), Taiji Suzuki (1,3), {kenta_oono, taiji}@mist.i.u-tokyo.ac.jp; 1 The University of Tokyo, 2 Preferred Networks, Inc., 3 RIKEN Center for Advanced Intelligence Project (AIP)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/delta2323/gnn-asymptotics.
Open Datasets | Yes | We use Cora, CiteSeer, and PubMed (Sen et al., 2008), which are standard citation network datasets.
Dataset Splits | Yes | We split all nodes in a graph (either Noisy Cora 2500/5000 or Noisy CiteSeer) into training, validation, and test sets. Data split is the same as the one done by Kipf & Welling (2017).
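The split follows the fixed Planetoid-style protocol of Kipf & Welling (2017). A minimal sketch of how such a fixed node split is typically expressed as boolean masks; the node counts below are the standard Cora split (140 train / 500 validation / 1000 test nodes) and are given only as an illustration, since the noisy variants used in the paper may differ:

    import numpy as np

    num_nodes = 2708  # number of nodes in standard Cora (illustrative)
    train_mask = np.zeros(num_nodes, dtype=bool)
    val_mask = np.zeros(num_nodes, dtype=bool)
    test_mask = np.zeros(num_nodes, dtype=bool)

    # Planetoid-style split: a small labeled training set, a validation set,
    # and a held-out test set, all defined by fixed index ranges.
    train_mask[:140] = True          # 20 labeled nodes per class, 7 classes
    val_mask[140:640] = True
    test_mask[-1000:] = True

    assert not (train_mask & val_mask).any()
    assert not (train_mask & test_mask).any()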
Hardware Specification | Yes | We conducted experiments on a single machine which has 2 Intel(R) Xeon(R) Gold 6136 CPUs @ 3.00GHz (24 cores), 192 GB memory (DDR4), and 3 GPGPUs (NVIDIA Tesla V100).
Software Dependencies | No | We used Chainer Chemistry, which is an extension library for the deep learning framework Chainer (Tokui et al., 2015; 2019), to implement GCNs and Optuna (Akiba et al., 2019) for hyperparameter tuning. The paper names software components but does not provide specific version numbers for them.
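For context, a minimal sketch of how hyperparameter tuning with Optuna is typically wired up; the search space and the train_and_evaluate helper below are illustrative assumptions, not the paper's actual setup or hyperparameter ranges:

    import optuna

    def train_and_evaluate(lr, weight_decay, hidden_units):
        # Hypothetical placeholder standing in for GCN training; in practice
        # this would train on the training split and return validation accuracy.
        return 0.0

    def objective(trial):
        # Illustrative search space; the paper's actual ranges (its Table 3)
        # are not reproduced here.
        lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
        weight_decay = trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True)
        hidden_units = trial.suggest_categorical("hidden_units", [16, 32, 64])
        return train_and_evaluate(lr, weight_decay, hidden_units)

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=100)
    print(study.best_params)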
Experiment Setup | Yes | Table 3 shows the set of hyperparameters from which we chose.