Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On Provable Benefits of Depth in Training Graph Convolutional Networks
Authors: Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
NeurIPS 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct empirical evaluations on various synthetic and real-world datasets to validate the correctness of our theory. |
| Researcher Affiliation | Academia | Weilin Cong (Penn State), Morteza Ramezani (Penn State), Mehrdad Mahdavi (Penn State) |
| Pseudocode | No | The paper describes the proposed DGCN model using mathematical formulations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | A simple experiment using DGL can be found here: https://github.com/Weilin-Cong/on-provable-benefits-of-depth-in-training-graph-convolutional-networks |
| Open Datasets | Yes | We conduct empirical evaluations on various synthetic and real-world datasets to validate the correctness of our theory... open graph benchmark (OGB) dataset... Arxiv dataset, and no dropout for Products and Protein datasets. |
| Dataset Splits | Yes | We chose 75% nodes as training set, 15% of nodes as validation set for hyper-parameter tuning, and the remaining nodes as testing set. |
| Hardware Specification | No | The paper mentions 'Due to limited GPU memory' in the context of selecting the number of layers, but it does not specify any particular GPU models, CPU models, or other hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions 'DGL' and 'Adam' optimizer but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We choose the hidden dimension as 128, learning rate as 0.01, dropout ratio as 0.5 for Arxiv dataset, and no dropout for Products and Protein datasets. We train 300/1000/500 epochs for Products, Proteins, and Arxiv dataset respectively... We choose α from {0.9, 0.8, 0.5} for APPNP and GCNII, and use βℓ = 0.5/ℓ for GCNII, and select the setup with the best validation result for comparison. |
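The dataset split and hyper-parameter choices quoted above can be summarized as a plain-Python sketch. This is illustrative only: the `CONFIG` dict and `split_nodes` helper are not from the paper or its repository, and the paper does not state how its random split was seeded.

```python
import random

# Hyper-parameters as reported in the paper's experiment setup.
CONFIG = {
    "hidden_dim": 128,
    "learning_rate": 0.01,
    "optimizer": "Adam",                                   # version unspecified in the paper
    "dropout": {"Arxiv": 0.5, "Products": 0.0, "Proteins": 0.0},
    "epochs": {"Products": 300, "Proteins": 1000, "Arxiv": 500},
}

def split_nodes(num_nodes, train_frac=0.75, val_frac=0.15, seed=0):
    """Randomly split node indices 75% train / 15% val / remainder test,
    matching the proportions reported in the Dataset Splits row."""
    idx = list(range(num_nodes))
    random.Random(seed).shuffle(idx)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_nodes(1000)  # 750 / 150 / 100 nodes
```

The three index lists are disjoint and cover all nodes, so the same split can be reused for hyper-parameter tuning on the validation set, as the paper describes.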