Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
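The validation step described above amounts to comparing LLM-produced labels against a manually labeled reference set and reporting accuracy per variable. A minimal sketch of that comparison is shown below; the records, labels, and function name here are hypothetical illustrations, not the actual data or pipeline from [1]:

```python
from collections import defaultdict

def per_variable_accuracy(records):
    """Compute accuracy of LLM labels against manual labels, per variable.

    `records` is a list of (variable, manual_label, llm_label) tuples.
    Returns a dict mapping each variable to its accuracy on the labeled set.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for variable, manual, llm in records:
        total[variable] += 1
        correct[variable] += int(manual == llm)
    return {v: correct[v] / total[v] for v in total}

# Hypothetical validation records: (variable, manual label, LLM label).
records = [
    ("Open Source Code", "Yes", "Yes"),
    ("Open Source Code", "No", "No"),
    ("Open Source Code", "No", "Yes"),  # a false positive by the LLM
    ("Pseudocode", "No", "No"),
    ("Pseudocode", "Yes", "Yes"),
]
print(per_variable_accuracy(records))
```

On these made-up records, "Pseudocode" scores 1.0 and "Open Source Code" scores 2/3, illustrating why per-variable accuracy metrics matter when interpreting the scores below.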

On the Universality of Graph Neural Networks on Large Random Graphs

Authors: Nicolas Keriven, Alberto Bietti, Samuel Vaiter

NeurIPS 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The code for the numerical illustrations is available at https://github.com/nkeriven/random-graph-gnn. Figure 3: Illustration of Prop. 9 on a Gaussian kernel, but with random edges and two-hop input filtering. The x-axis is the latent variables in X = [−1, 1]; the y-axis is the output of an SGNN trained to approximate some function f (red curve).
Researcher Affiliation | Academia | Nicolas Keriven, CNRS, GIPSA-lab, Grenoble, France (EMAIL); Alberto Bietti, NYU Center for Data Science, New York, USA (EMAIL); Samuel Vaiter, CNRS, LJAD, Nice, France (EMAIL)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code for the numerical illustrations is available at https://github.com/nkeriven/random-graph-gnn.
Open Datasets | No | The paper discusses theoretical random-graph models such as SBMs and radial kernels, but does not specify a publicly available named dataset used for training.
Dataset Splits | No | The paper does not specify dataset splits (e.g., train/validation/test percentages or counts) needed to reproduce the experiment.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models) used to run its experiments.
Software Dependencies | No | The paper does not provide software dependency details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | No | The main text does not contain specific experimental setup details, such as concrete hyperparameter values or training configurations.