On Provable Benefits of Depth in Training Graph Convolutional Networks

Authors: Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct empirical evaluations on various synthetic and real-world datasets to validate the correctness of our theory.
Researcher Affiliation | Academia | Weilin Cong (Penn State, wxc272@psu.edu); Morteza Ramezani (Penn State, morteza@cse.psu.edu); Mehrdad Mahdavi (Penn State, mzm616@psu.edu)
Pseudocode | No | The paper describes the proposed DGCN model using mathematical formulations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | A simple experiment using DGL can be found here: https://github.com/Weilin-Cong/on-provable-benefits-of-depth-in-training-graph-convolutional-networks
Open Datasets | Yes | We conduct empirical evaluations on various synthetic and real-world datasets to validate the correctness of our theory... open graph benchmark (OGB) dataset... Arxiv dataset, and no dropout for Products and Protein datasets.
Dataset Splits | Yes | We chose 75% nodes as training set, 15% of nodes as validation set for hyper-parameter tuning, and the remaining nodes as testing set. (A split sketch follows the table.)
Hardware Specification | No | The paper mentions 'Due to limited GPU memory' in the context of selecting the number of layers, but it does not specify any particular GPU models, CPU models, or other hardware specifications used for running the experiments.
Software Dependencies | No | The paper mentions 'DGL' and 'Adam' optimizer but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We choose the hidden dimension as 128, learning rate as 0.01, dropout ratio as 0.5 for Arxiv dataset, and no dropout for Products and Protein datasets. We train 300/1000/500 epochs for Products, Proteins, and Arxiv dataset respectively... We choose αℓ from {0.9, 0.8, 0.5} for APPNP and GCNII, and use βℓ = 0.5/ℓ for GCNII, and select the setup with the best validation result for comparison.
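
The data preparation reported above (OGB graphs, 75%/15%/10% node split) can be illustrated with a short sketch. This is not the authors' code: the dataset name (ogbn-arxiv), the fixed seed, and all variable names are assumptions, and the split is a plain random permutation of the nodes.

```python
# Minimal sketch: load an OGB node-classification graph with DGL and draw a
# random 75%/15%/10% train/validation/test split over its nodes.
# Illustrative only; dataset name, seed, and variable names are assumptions.
import torch
from ogb.nodeproppred import DglNodePropPredDataset

dataset = DglNodePropPredDataset(name="ogbn-arxiv")
graph, labels = dataset[0]                      # DGL graph and node labels

num_nodes = graph.num_nodes()
perm = torch.randperm(num_nodes, generator=torch.Generator().manual_seed(0))

n_train = int(0.75 * num_nodes)                 # 75% of nodes for training
n_val = int(0.15 * num_nodes)                   # 15% for hyper-parameter tuning
train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]               # remaining ~10% for testing
```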
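
The quoted experiment setup can likewise be collected into a small configuration sketch. Only the numbers (hidden dimension, learning rate, dropout ratios, epoch counts, and the αℓ/βℓ choices) come from the paper's reported setup; the dictionary layout, dataset keys, and helper function are hypothetical.

```python
# Hyper-parameters as reported in the paper's setup; the structure below is
# an assumed layout, not the authors' configuration files.
SETUP = {
    "ogbn-products": {"hidden_dim": 128, "lr": 0.01, "dropout": 0.0, "epochs": 300},
    "ogbn-proteins": {"hidden_dim": 128, "lr": 0.01, "dropout": 0.0, "epochs": 1000},
    "ogbn-arxiv":    {"hidden_dim": 128, "lr": 0.01, "dropout": 0.5, "epochs": 500},
}

# Residual coefficients tuned on the validation set: alpha_l is searched over
# {0.9, 0.8, 0.5} for APPNP and GCNII, and GCNII additionally uses beta_l = 0.5 / l.
ALPHA_GRID = [0.9, 0.8, 0.5]

def beta(layer: int) -> float:
    """GCNII coefficient beta_l = 0.5 / l for layer index l >= 1."""
    return 0.5 / layer

print(beta(4))  # 0.125
```

The paper reports training with the Adam optimizer at the learning rates above, but since no DGL or PyTorch versions are stated, none are pinned in this sketch.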