Simple and Deep Graph Convolutional Networks

Authors: Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, Yaliang Li

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate the performance of GCNII against the state-of-the-art graph neural network models on a wide variety of open graph datasets. [...] We use three standard citation network datasets Cora, Citeseer, and Pubmed (Sen et al., 2008) for semi-supervised node classification. [...] Table 2 reports the mean classification accuracy (%) results on Cora, Citeseer, and Pubmed.
Researcher Affiliation | Collaboration | 1School of Information, Renmin University of China; 2Gaoling School of Artificial Intelligence, Renmin University of China [...] 5School of Data Science, Fudan University; 6Alibaba Group.
Pseudocode | No | The paper does not include any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm', nor are there any clearly formatted pseudocode blocks. (An illustrative sketch of the GCNII propagation rule follows this table.)
Open Source Code | No | The paper does not include an explicit statement from the authors about releasing their source code for the GCNII methodology, nor does it provide a direct link to such a repository.
Open Datasets | Yes | We use three standard citation network datasets Cora, Citeseer, and Pubmed (Sen et al., 2008) for semi-supervised node classification. [...] For full-supervised node classification, we also include Chameleon (Rozemberczki et al., 2019), Cornell, Texas, and Wisconsin (Pei et al., 2020). [...] For inductive learning, we use Protein-Protein Interaction (PPI) networks (Hamilton et al., 2017). (A dataset-loading sketch follows this table.)
Dataset Splits | Yes | For the semi-supervised node classification task, we apply the standard fixed training/validation/testing split (Yang et al., 2016) on three datasets Cora, Citeseer, and Pubmed, with 20 nodes per class for training, 500 nodes for validation and 1,000 nodes for testing. [...] we randomly split nodes of each class into 60%, 20%, and 20% for training, validation and testing. (A split-construction sketch follows this table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory specifications) used to run the experiments.
Software Dependencies | No | The paper does not specify the version numbers for any key software components or libraries (e.g., Python, PyTorch, TensorFlow) used in the implementation or for running the experiments.
Experiment Setup | Yes | We use the Adam SGD optimizer (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a patience of 100 epochs to train GCNII and GCNII*. We set αℓ = 0.1 and L2 regularization to 0.0005 for the dense layer on all datasets. [...] We fix the learning rate to 0.01, dropout rate to 0.5 and the number of hidden units to 64 on all datasets. (A training-setup sketch follows this table.)
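
Although the paper provides no pseudocode, the GCNII propagation rule it defines (an initial residual connection to the first-layer representation plus an identity mapping on the weight matrix) is compact enough to sketch directly. Below is a minimal PyTorch reconstruction, not the authors' code: the class and argument names are illustrative, `p_hat` is assumed to be a precomputed sparse normalized adjacency with self-loops, and `lam` (the paper's λ) defaults to a placeholder value.

```python
import math
import torch
import torch.nn as nn

class GCNIILayer(nn.Module):
    """One GCNII propagation layer (illustrative reconstruction).

    H^(l+1) = ReLU( ((1 - alpha) * P_hat @ H^(l) + alpha * H^(0))
                    @ ((1 - beta_l) * I + beta_l * W^(l)) ),
    with beta_l = log(lambda / l + 1).
    """

    def __init__(self, dim: int, layer_idx: int, alpha: float = 0.1, lam: float = 0.5):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)  # W^(l)
        self.alpha = alpha                             # initial-residual strength (0.1 in the paper)
        self.beta = math.log(lam / layer_idx + 1)      # identity-mapping strength; lam value is a placeholder

    def forward(self, p_hat: torch.Tensor, h: torch.Tensor, h0: torch.Tensor) -> torch.Tensor:
        # Initial residual connection to the first-layer representation H^(0).
        support = (1 - self.alpha) * torch.sparse.mm(p_hat, h) + self.alpha * h0
        # Identity mapping: interpolate between the support and its linear transform.
        return torch.relu((1 - self.beta) * support + self.beta * self.weight(support))
```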
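
All of the datasets listed in the Open Datasets row are publicly available. The paper does not say how the authors obtained them, but one plausible route is PyTorch Geometric, which ships loaders for every graph named above; the root paths below are arbitrary.

```python
# Hedged example: PyTorch Geometric loaders for the datasets named in the paper.
from torch_geometric.datasets import PPI, Planetoid, WebKB, WikipediaNetwork

cora = Planetoid(root='data/Planetoid', name='Cora')               # Sen et al., 2008
citeseer = Planetoid(root='data/Planetoid', name='Citeseer')
pubmed = Planetoid(root='data/Planetoid', name='Pubmed')

chameleon = WikipediaNetwork(root='data/Wiki', name='chameleon')   # Rozemberczki et al., 2019
cornell = WebKB(root='data/WebKB', name='Cornell')                 # Pei et al., 2020
texas = WebKB(root='data/WebKB', name='Texas')
wisconsin = WebKB(root='data/WebKB', name='Wisconsin')

ppi_train = PPI(root='data/PPI', split='train')                    # Hamilton et al., 2017
ppi_val = PPI(root='data/PPI', split='val')
ppi_test = PPI(root='data/PPI', split='test')
```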
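
For the full-supervised setting, the 60%/20%/20% random per-class split described in the Dataset Splits row can be rebuilt along the following lines. The helper name and the seeding scheme are assumptions; the paper does not publish its splitting code.

```python
import torch

def per_class_split(labels: torch.Tensor, train_frac: float = 0.6,
                    val_frac: float = 0.2, seed: int = 0):
    """Randomly split the nodes of each class into train/val/test masks
    (60%/20%/20% by default). Illustrative helper, not the authors' code."""
    gen = torch.Generator().manual_seed(seed)
    n = labels.size(0)
    train_mask = torch.zeros(n, dtype=torch.bool)
    val_mask = torch.zeros(n, dtype=torch.bool)
    test_mask = torch.zeros(n, dtype=torch.bool)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(idx.numel(), generator=gen)]
        n_train = int(train_frac * idx.numel())
        n_val = int(val_frac * idx.numel())
        train_mask[idx[:n_train]] = True
        val_mask[idx[n_train:n_train + n_val]] = True
        test_mask[idx[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask
```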
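
The Experiment Setup row pins down the optimizer (Adam), learning rate (0.01), dropout (0.5), hidden width (64), L2 regularization on the dense layer (0.0005), and early-stopping patience (100 epochs). A minimal training-loop sketch using those values is shown below; the `GCNII` constructor, the parameter grouping, the maximum epoch count, and the `train_one_epoch`/`evaluate` helpers are hypothetical stand-ins, and the weight decay on the convolutional layers is a placeholder because the paper tunes it per dataset.

```python
import torch

# Hyperparameters quoted in the paper: Adam, lr = 0.01, dropout = 0.5,
# 64 hidden units, L2 = 0.0005 on the dense layer, patience = 100.
model = GCNII(in_dim=num_features, hidden=64, out_dim=num_classes,
              alpha=0.1, dropout=0.5)                      # hypothetical constructor

optimizer = torch.optim.Adam([
    {'params': model.dense_params, 'weight_decay': 5e-4},  # dense (fully connected) layers
    {'params': model.conv_params, 'weight_decay': 1e-2},   # placeholder: tuned per dataset in the paper
], lr=0.01)

best_val_acc, patience, wait = 0.0, 100, 0
for epoch in range(1500):                                  # maximum epoch count is an assumption
    train_one_epoch(model, optimizer)                      # hypothetical helper
    val_acc = evaluate(model, split='val')                 # hypothetical helper
    if val_acc > best_val_acc:
        best_val_acc, wait = val_acc, 0
    else:
        wait += 1
        if wait >= patience:                               # stop after 100 epochs without improvement
            break
```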