A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks
Authors: Renjie Liao, Raquel Urtasun, Richard Zemel
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an empirical study on several synthetic and real-world graph datasets and verify that our PAC-Bayes bound is tighter than others. |
| Researcher Affiliation | Academia | University of Toronto, Vector Institute, Canadian Institute for Advanced Research; {rjliao, urtasun, zemel}@cs.toronto.edu |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks in the provided text. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described. |
| Open Datasets | Yes | We experiment on 6 synthetic datasets of random graphs (corresponding to 6 random graph models), 3 social network datasets (COLLAB, IMDB-BINARY, IMDB-MULTI), and a bioinformatics dataset PROTEINS from (Yanardag & Vishwanathan, 2015). A hedged dataset-loading sketch is given after the table. |
| Dataset Splits | No | The paper mentions a "training set S with size m" but does not specify any train/validation/test splits (e.g., percentages or counts) within the provided text. It defers to Appendix A.7 for more details but does not provide them here. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library versions like PyTorch 1.9). |
| Experiment Setup | No | The paper states that "More details of the experimental setup, dataset statistics, and the bound computation are provided in Appendix A.7." and that "Constants are considered in the bound computation." However, it does not provide specific setup details such as hyperparameter values (learning rate, batch size, number of epochs) or optimizer settings within the provided text; a hedged sketch of a generic PAC-Bayes bound computation follows the table. |
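
For context on the Open Datasets row: the four real-world graph classification datasets named above (COLLAB, IMDB-BINARY, IMDB-MULTI, PROTEINS) are all distributed in the TUDataset collection. The snippet below is a minimal loading sketch, assuming PyTorch Geometric is used; the paper does not state which library or data-loading code the authors relied on.

```python
from torch_geometric.datasets import TUDataset

# Hypothetical loading sketch -- the paper does not specify its data pipeline.
# Each dataset is downloaded on first use into the given root directory.
for name in ["COLLAB", "IMDB-BINARY", "IMDB-MULTI", "PROTEINS"]:
    dataset = TUDataset(root="data/TUDataset", name=name)
    print(f"{name}: {len(dataset)} graphs, {dataset.num_classes} classes")
```

For context on the "bound computation" mentioned in the Experiment Setup row: evaluating a PAC-Bayes bound amounts to adding a KL-dependent complexity term to the empirical risk. The sketch below implements a generic McAllester/Maurer-style bound, not the paper's GNN-specific bound; the constants and the numeric inputs are illustrative assumptions only.

```python
import math

def pac_bayes_bound(empirical_risk: float, kl_divergence: float,
                    m: int, delta: float = 0.05) -> float:
    """Generic McAllester/Maurer-style PAC-Bayes bound (via Pinsker's
    inequality): with probability at least 1 - delta over a training set
    of size m, the expected risk of the posterior Q is at most the
    empirical risk plus sqrt((KL(Q||P) + ln(2*sqrt(m)/delta)) / (2m))."""
    complexity = math.sqrt(
        (kl_divergence + math.log(2.0 * math.sqrt(m) / delta)) / (2.0 * m)
    )
    return empirical_risk + complexity

# Purely hypothetical numbers, not taken from the paper.
print(pac_bayes_bound(empirical_risk=0.12, kl_divergence=350.0, m=4500))
```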
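
The bound computed by the paper specializes this template to GCNs (via spectral norms of the weight matrices and the maximum node degree), so the sketch above should be read only as an illustration of the general recipe: empirical risk plus a square-root complexity term.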