Effects of Graph Convolutions in Multi-layer Networks
Authors: Aseem Baranwal, Kimon Fountoulakis, Aukosh Jagannath
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present extensive experiments on both synthetic and real-world data that illustrate our results. |
| Researcher Affiliation | Academia | David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada. {aseem.baranwal,kimon.fountoulakis}@uwaterloo.ca Department of Statistics and Actuarial Science, Department of Applied Mathematics, University of Waterloo, Waterloo, Canada. a.jagannath@uwaterloo.ca |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | For real-world data, we test our results on three graph benchmarks: CORA, CiteSeer, and Pubmed citation network datasets (Sen et al., 2008). Results for larger datasets are presented in Appendix B.2. ... Furthermore, we perform the same experiments on relatively larger datasets, OGBN-arXiv and OGBN-products (Hu et al., 2020). |
| Dataset Splits | No | The paper mentions 'public splits for the real datasets' but does not provide specific percentages, sample counts, or detailed splitting methodology for training, validation, or test sets. |
| Hardware Specification | Yes | The models were trained on an Nvidia Titan Xp GPU, using the Adam optimizer with learning rate 10^-3, weight decay 10^-5, and 50 to 500 epochs varying among the datasets. |
| Software Dependencies | No | The paper mentions 'PyTorch Geometric (Fey & Lenssen, 2019)' but does not specify its version number, nor does it list specific version numbers for other key software components or libraries. |
| Experiment Setup | Yes | The models were trained on an Nvidia Titan Xp GPU, using the Adam optimizer with learning rate 10^-3, weight decay 10^-5, and 50 to 500 epochs varying among the datasets. ... For 2-layer networks, the hidden layer has width 16, and for 3-layer networks, both hidden layers have width 16. We use a dropout probability of 0.5 and a weight decay of 10^-5 while training. |
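
The table rows above name the optimizer, learning rate, weight decay, dropout, and hidden width, but the paper excerpts do not spell out a full training script. The following is a minimal, hedged sketch of what such a setup could look like with PyTorch Geometric, assuming a standard 2-layer GCN, cross-entropy loss, the public Planetoid split for CORA, and an epoch count of 200 chosen from the reported 50-to-500 range; none of these specific choices are confirmed by the paper.

```python
# Hedged reconstruction of the reported setup: 2-layer GCN, hidden width 16,
# dropout 0.5, Adam with lr 1e-3 and weight decay 1e-5, public Planetoid split.
# Architecture details beyond the excerpts (loss, activation, epoch count) are assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Planetoid", name="Cora")  # CiteSeer/Pubmed are analogous
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)   # hidden width 16, per the table
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)  # dropout 0.5, per the table
        return self.conv2(x, edge_index)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = GCN(dataset.num_features, 16, dataset.num_classes).to(device)
data = data.to(device)

# Adam with lr 1e-3 and weight decay 1e-5, as reported; 200 epochs is an assumed value.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

model.eval()
pred = model(data.x, data.edge_index).argmax(dim=-1)
test_acc = (pred[data.test_mask] == data.y[data.test_mask]).float().mean().item()
print(f"test accuracy: {test_acc:.3f}")
```

For the 3-layer variant described in the table, one would insert a second hidden `GCNConv(16, 16)` layer before the output layer; the OGBN datasets would instead be loaded through `ogb`-provided loaders rather than `Planetoid`.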