Towards Understanding Generalization of Graph Neural Networks
Authors: Huayi Tang, Yong Liu
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets validate the theoretical findings. |
| Researcher Affiliation | Academia | ¹Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; ²Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China. |
| Pseudocode | Yes | Algorithm 1 SGD for Transductive Learning (sketched in code below the table). |
| Open Source Code | No | For GCN, GAT, SGC, APPNP and GCNII, we adopt the official PyTorch Geometric library implementations (Fey & Lenssen, 2019). For GPRGNN, we adopt the released code (https://github.com/jianhao2016/GPRGNN) at commit 2507f10. The paper uses existing third-party code for the models it analyzes but does not release its own code for the theoretical analysis or for the experimental validation methodology it describes. |
| Open Datasets | Yes | We conduct experiments on widely adopted benchmark datasets, including Cora, Citeseer, and Pubmed (Sen et al., 2008; Yang et al., 2016). ... Moreover, we also conduct experiments on large-scale dataset ogbn-arxiv (Hu et al., 2020). |
| Dataset Splits | No | Following the standard transductive learning setting, in each run, 30% of the nodes, sampled according to a random seed, are used as the training set and the remaining nodes are treated as the test set. ... T is determined by the performance of the model on the validation set. The paper thus specifies a 30%/70% train/test split for some datasets and mentions a validation set for early stopping, but it does not give the validation set's size or construction for all experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | For GCN, GAT, SGC, APPNP and GCNII, we adopt the official PyTorch Geometric library implementations (Fey & Lenssen, 2019). ... For GPRGNN, we adopt the released code (https://github.com/jianhao2016/GPRGNN) at commit 2507f10. The paper names software such as PyTorch Geometric but gives no version numbers for these or other key components, which full reproducibility requires. |
| Experiment Setup | Yes | The batch size is set to 512 and the number of hidden units is set to 64 for all baseline models. ... K is set to 10 for APPNP and GPRGNN. ... The number of iterations is fixed to T = 300. ... For ogbn-arxiv, following the official implementation in (Hu et al., 2020), we adopt the Adam optimizer with learning rate 0.01. We set T = 700 and adopt the standard split. ... We remove all dropout layers and adopt the Adam optimizer with default settings. Minimal sketches of both setups are given below the table. |
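
To make the quoted setup concrete, here is a minimal sketch, not the authors' code, of a 30%/70% random transductive split and an Algorithm-1-style SGD loop on Cora, using the hyperparameters reported above (batch size 512, 64 hidden units, T = 300, default Adam, no dropout). The two-layer GCN architecture, the seed value, and the with-replacement batch sampling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data", name="Cora")
data = dataset[0]

# 30% of nodes for training, the rest for testing, fixed by a random seed.
torch.manual_seed(0)  # assumed seed; the paper uses a fresh seed per run
perm = torch.randperm(data.num_nodes)
n_train = int(0.3 * data.num_nodes)
train_idx, test_idx = perm[:n_train], perm[n_train:]

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))  # no dropout, per the setup row
        return self.conv2(x, edge_index)

model = GCN(dataset.num_features, 64, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters())  # default Adam settings

T, batch_size = 300, 512
for step in range(T):
    model.train()
    optimizer.zero_grad()
    # Transductive SGD: the forward pass sees the whole graph, but each
    # step's loss is computed on a random mini-batch of training nodes.
    batch = train_idx[torch.randint(len(train_idx), (batch_size,))]
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[batch], data.y[batch])
    loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    pred = model(data.x, data.edge_index).argmax(dim=-1)
test_acc = (pred[test_idx] == data.y[test_idx]).float().mean().item()
print(f"test accuracy: {test_acc:.3f}")
```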
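
For ogbn-arxiv, the quoted setup instead uses the standard OGB split with Adam (learning rate 0.01) and T = 700. Below is a minimal sketch of retrieving that split with the `ogb` package; the training loop would mirror the one above, and the use of the validation indices for choosing T is an assumption consistent with the paper's remark on early stopping.

```python
from ogb.nodeproppred import PygNodePropPredDataset

dataset = PygNodePropPredDataset(name="ogbn-arxiv", root="data")
data = dataset[0]
split_idx = dataset.get_idx_split()  # the standard train/valid/test split
train_idx = split_idx["train"]
valid_idx = split_idx["valid"]  # assumed to pick the stopping iteration T
test_idx = split_idx["test"]

# Per the quoted setup: Adam with learning rate 0.01 and T = 700 iterations.
```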