Semi-Supervised Classification with Graph Convolutional Networks

Authors: Thomas N. Kipf, Max Welling

ICLR 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments on a number of datasets demonstrate that our model compares favorably both in classification accuracy and efficiency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning." |
| Researcher Affiliation | Academia | Thomas N. Kipf (University of Amsterdam, T.N.Kipf@uva.nl); Max Welling (University of Amsterdam and Canadian Institute for Advanced Research (CIFAR), M.Welling@uva.nl) |
| Pseudocode | Yes | "Algorithm 1: WL-1 algorithm (Weisfeiler & Lehmann, 1968)" (see the propagation-rule sketch after this table) |
| Open Source Code | Yes | "Code to reproduce our experiments is available at https://github.com/tkipf/gcn." |
| Open Datasets | Yes | "We consider three citation network datasets: Citeseer, Cora and Pubmed (Sen et al., 2008)... NELL is a dataset extracted from the knowledge graph introduced in (Carlson et al., 2010)." |
| Dataset Splits | Yes | "We choose the same dataset splits as in Yang et al. (2016) with an additional validation set of 500 labeled examples for hyperparameter optimization... We train all models for a maximum of 200 epochs (training iterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a window size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutive epochs." (see the early-stopping sketch after this table) |
| Hardware Specification | Yes | "Hardware used: 16-core Intel® Xeon® CPU E5-2640 v3 @ 2.60GHz, GeForce® GTX TITAN X" |
| Software Dependencies | No | "In practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based implementation of Eq. 9 using sparse-dense matrix multiplications." The paper names TensorFlow but gives no version number. (see the sparse-multiplication sketch after this table) |
| Experiment Setup | Yes | "We train all models for a maximum of 200 epochs (training iterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a window size of 10... We used the following sets of hyperparameters for Citeseer, Cora and Pubmed: 0.5 (dropout rate), 5·10⁻⁴ (L2 regularization) and 16 (number of hidden units); and for NELL: 0.1 (dropout rate), 1·10⁻⁵ (L2 regularization) and 64 (number of hidden units)." (see the hyperparameter sketch after this table) |
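
The Pseudocode row refers to Algorithm 1, which relates the model to the WL-1 node-coloring procedure. The model itself is defined by the layer-wise propagation rule H^(l+1) = sigma(D̃^{-1/2} Ã D̃^{-1/2} H^(l) W^(l)) with Ã = A + I (Eq. 2 of the paper). Below is a minimal NumPy sketch of one such layer; the function name, the dense adjacency representation, and the toy graph are our illustration, not the authors' code.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN layer: ReLU(D̃^{-1/2} (A + I) D̃^{-1/2} H W)."""
    a_tilde = adj + np.eye(adj.shape[0])                 # add self-connections
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))      # D̃^{-1/2} as a vector
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(a_hat @ features @ weights, 0.0)   # linear transform + ReLU

# Toy usage: a 3-node path graph, 4 input features, 2 output features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = gcn_layer(adj, rng.standard_normal((3, 4)), rng.standard_normal((4, 2)))
```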
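The training protocol quoted in the Dataset Splits and Experiment Setup rows (at most 200 epochs, stop once the validation loss has not decreased for 10 consecutive epochs) fits in a small loop. The sketch below assumes hypothetical train_step and val_loss callables supplied by the surrounding training code; the names are ours.

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=200, window=10):
    """Train for up to max_epochs, stopping early once the validation loss
    has not decreased for `window` consecutive epochs (the paper's criterion)."""
    best, stalled = float("inf"), 0
    for epoch in range(max_epochs):
        train_step()                      # one epoch of optimization (e.g. Adam, lr 0.01)
        loss = val_loss()                 # loss on the 500-example validation set
        if loss < best:
            best, stalled = loss, 0
        else:
            stalled += 1
            if stalled >= window:
                break                     # 10 epochs without improvement: stop
    return best

# Toy usage: a validation loss that plateaus triggers the early stop.
losses = iter([1.0, 0.8, 0.7] + [0.7] * 300)
print(train_with_early_stopping(lambda: None, lambda: next(losses)))  # prints 0.7
```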
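The Software Dependencies row quotes the paper's use of sparse-dense matrix multiplications in TensorFlow. The pattern looks roughly like the sketch below, written against the current tf.sparse API; the paper predates this naming (and pins no version), so this illustrates the technique rather than reproducing the authors' implementation.

```python
import tensorflow as tf

# Toy 3-node path graph stored sparsely: edges (0,1), (1,0), (1,2), (2,1).
a_hat = tf.sparse.SparseTensor(
    indices=[[0, 1], [1, 0], [1, 2], [2, 1]],
    values=tf.ones(4),
    dense_shape=[3, 3],
)
x = tf.random.normal([3, 4])  # dense node features H
w = tf.random.normal([4, 2])  # dense layer weights W
# Sparse-dense product: cost scales with the number of edges, not N^2.
z = tf.sparse.sparse_dense_matmul(a_hat, x @ w)
```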
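Finally, the hyperparameters reported in the Experiment Setup row can be collected into a configuration sketch. The dictionary keys are our own shorthand, not identifiers from the authors' codebase; the values are the paper's.

```python
# Values from the paper; key names are ours.
HYPERPARAMS = {
    "citeseer_cora_pubmed": {"dropout": 0.5, "l2_reg": 5e-4, "hidden_units": 16},
    "nell": {"dropout": 0.1, "l2_reg": 1e-5, "hidden_units": 64},
}
OPTIMIZER = {"name": "adam", "learning_rate": 0.01,
             "max_epochs": 200, "early_stopping_window": 10}
```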