Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning
Authors: Qimai Li, Zhichao Han, Xiao-Ming Wu
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmarks have verified our theory and proposals. |
| Researcher Affiliation | Academia | Qimai Li¹, Zhichao Han¹,², Xiao-Ming Wu¹ (¹The Hong Kong Polytechnic University, ²ETH Zurich) |
| Pseudocode | Yes | Algorithm 1: Expand the Label Set via ParWalks; Algorithm 2: Expand the Label Set via Self-Training. (A sketch of the self-training loop follows this table.) |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology. |
| Open Datasets | Yes | We conduct experiments on three commonly used citation networks: CiteSeer, Cora, and PubMed (Sen et al. 2008). The datasets we use for testing are provided by the authors of (Yang, Cohen, and Salakhutdinov 2016) and (Kipf and Welling 2017). |
| Dataset Splits | Yes | For each run, we randomly split labels into a small set for training, and a set with 1000 samples for testing. For GCN+V, we follow (Kipf and Welling 2017) to sample an additional 500 labels for validation. (A split sketch follows this table.) |
| Hardware Specification | No | The paper states 'On PubMed, it takes less than 0.38 seconds in MATLAB R2015b' when discussing computational cost, but it does not specify hardware details such as the GPU or CPU models used for the experiments. |
| Software Dependencies | Yes | On PubMed, it takes less than 0.38 seconds in MATLAB R2015b. |
| Experiment Setup | Yes | For GCNs, we use the same hyper-parameters as in (Kipf and Welling 2017): a learning rate of 0.01, 200 maximum epochs, 0.5 dropout rate, 5e-4 L2 regularization weight, 2 convolutional layers, and 16 hidden units, which are validated on Cora by Kipf and Welling. (These settings are collected in the configuration sketch below.) |
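
The paper's Algorithm 2 expands the label set by self-training: a GCN is trained on the given labels, and for each class the most confidently predicted unlabeled vertices are pseudo-labeled and added to the training set. Below is a minimal sketch of that loop, not the authors' implementation: `train_gcn` is a hypothetical helper that trains a GCN and returns per-vertex softmax confidences, and `t` (vertices added per class) is an assumed parameter.

```python
import numpy as np

def expand_labels_self_training(train_gcn, features, adj, labels,
                                train_idx, num_classes, t=100):
    """Sketch of self-training label expansion (the paper's Algorithm 2).

    train_gcn: hypothetical helper; trains a GCN on (features, adj,
    labels, train_idx) and returns an (n, num_classes) array of
    softmax confidence scores.
    """
    scores = train_gcn(features, adj, labels, train_idx)
    predictions = scores.argmax(axis=1)

    labeled = set(int(v) for v in train_idx)
    new_train_idx = list(train_idx)
    for c in range(num_classes):
        # Unlabeled vertices predicted as class c, most confident first.
        candidates = [int(v) for v in np.argsort(-scores[:, c])
                      if int(v) not in labeled and predictions[v] == c]
        for v in candidates[:t]:
            labels[v] = c            # pseudo-label with the predicted class
            new_train_idx.append(v)
            labeled.add(v)
    return np.array(new_train_idx), labels
```

Algorithm 1 follows the same expansion pattern but ranks vertices by the absorption probabilities of partially absorbing random walks (ParWalks) rather than by GCN confidences.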
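
The dataset splits are likewise straightforward to reproduce. A minimal sketch of the random label split, assuming only the reported sizes (1000 test samples; 500 validation labels for GCN+V) and a per-experiment training-set size; the function name and seed handling are illustrative:

```python
import numpy as np

def random_split(n, train_size, test_size=1000, val_size=500, seed=0):
    """Sketch of the paper's random label split. Only the 1000-sample
    test set and 500-label validation set are reported; train_size
    varies per experiment."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    train = perm[:train_size]
    test = perm[train_size:train_size + test_size]
    val = perm[train_size + test_size:train_size + test_size + val_size]
    return train, val, test
```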
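
Finally, the reported hyperparameters map one-to-one onto a training configuration for the two-layer GCN of Kipf and Welling (2017). A minimal sketch; the class and field names here are illustrative, not taken from the authors' code:

```python
from dataclasses import dataclass

@dataclass
class GCNConfig:
    """Hyperparameters reported in the paper (following Kipf and Welling 2017)."""
    learning_rate: float = 0.01   # Adam learning rate
    max_epochs: int = 200         # maximum training epochs
    dropout: float = 0.5          # dropout rate
    weight_decay: float = 5e-4    # L2 regularization weight
    num_layers: int = 2           # graph convolutional layers
    hidden_units: int = 16        # hidden-layer width

config = GCNConfig()  # defaults reproduce the paper's reported setup
```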