Simplifying Graph Convolutional Networks
Authors: Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, Kilian Weinberger
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first evaluate SGC on citation networks and social networks and then extend our empirical analysis to a wide range of downstream tasks. |
| Researcher Affiliation | Academia | 1Cornell University 2Federal Institute of Ceará (Brazil). |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available on GitHub: https://github.com/Tiiiger/SGC |
| Open Datasets | Yes | We evaluate the semi-supervised node classification performance of SGC on the Cora, Citeseer, and Pubmed citation network datasets (Table 2) (Sen et al., 2008). We supplement our citation network analysis by using SGC to inductively predict community structure on Reddit (Table 3), which consists of a much larger graph. Dataset statistics are summarized in Table 1. |
| Dataset Splits | Yes | Dataset statistics are summarized in Table 1. ... we tune this hyperparameter on each dataset using hyperopt (Bergstra et al., 2015) for 60 iterations on the public split validation set. ... Table 1. Dataset statistics of the citation networks and Reddit. Dataset # Nodes # Edges Train/Dev/Test Nodes Cora 2,708 5,429 140/500/1,000 |
| Hardware Specification | Yes | We measure the training time on an NVIDIA GTX 1080 Ti GPU and present the benchmark details in supplementary materials. |
| Software Dependencies | No | The paper mentions software like Adam and hyperopt but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | On the citation networks, we train SGC for 100 epochs using Adam (Kingma & Ba, 2015) with learning rate 0.2. In addition, we use weight decay and tune this hyperparameter on each dataset using hyperopt (Bergstra et al., 2015) for 60 iterations on the public split validation set. ... On the Reddit dataset, we train SGC with L-BFGS (Liu & Nocedal, 1989) using no regularization, and remarkably, training converges in 2 steps. |
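The setup described above relies on SGC's key simplification: the graph convolution layers collapse into a single fixed feature-propagation step, S^K X with S the symmetrically normalized adjacency with self-loops, followed by an ordinary linear (logistic-regression) classifier trained with Adam. The sketch below illustrates only the propagation step in NumPy; the function name and the toy graph are illustrative, not taken from the authors' repository.

```python
import numpy as np

def sgc_precompute(adj, features, k=2):
    """Compute S^k X, where S = D^{-1/2} (A + I) D^{-1/2} is the
    symmetrically normalized adjacency with self-loops (the fixed,
    parameter-free propagation used by SGC)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    s = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    out = features
    for _ in range(k):                           # k propagation hops
        out = s @ out
    return out

# Toy example: a 3-node path graph with 2-dimensional node features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.eye(3, 2)
smoothed = sgc_precompute(adj, x, k=2)           # shape (3, 2)
```

After this precomputation, the paper's recipe amounts to fitting a single linear classifier on `smoothed` (Adam, learning rate 0.2, 100 epochs, tuned weight decay on the citation networks; L-BFGS with no regularization on Reddit).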