Convergence and Stability of Graph Convolutional Networks on Large Random Graphs
Authors: Nicolas Keriven, Alberto Bietti, Samuel Vaiter
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide simple numerical experiments on synthetic data that illustrate the convergence and stability of GCNs. We consider untrained GCNs with random weights in order to assess how these properties result from the choice of architecture rather than learning. The code is accessible at https://github.com/nkeriven/random-graph-gnn. |
| Researcher Affiliation | Academia | Nicolas Keriven CNRS, GIPSA-lab, Grenoble, France nicolas.keriven@cnrs.fr; Alberto Bietti NYU Center for Data Science, New York, USA alberto.bietti@nyu.edu; Samuel Vaiter CNRS, IMB, Dijon, France samuel.vaiter@u-bourgogne.fr |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code is accessible at https://github.com/nkeriven/random-graph-gnn. |
| Open Datasets | No | The paper does not use a publicly available dataset. It uses 'synthetic data' generated by sampling nodes on a 3-dimensional manifold to illustrate theoretical properties. |
| Dataset Splits | No | The paper focuses on numerical illustrations of theoretical properties using synthetic data and untrained GCNs, rather than a standard machine learning setup with explicit training, validation, and test splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the numerical experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python version, library versions). |
| Experiment Setup | No | The paper mentions using 'untrained GCNs with random weights' for its numerical illustrations but does not provide specific experimental setup details such as hyperparameters (learning rate, batch size, epochs, optimizer settings) or other system-level training configurations; a minimal illustrative sketch of this setup follows the table. |
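
The sketch below is a rough, assumption-laden illustration of the setup described in the table (an untrained GCN with random weights applied to a random graph whose nodes are sampled from a low-dimensional manifold). It is not the authors' code from the linked repository: the unit sphere, the Gaussian connectivity kernel, the bandwidth, and the layer sizes are all illustrative choices.

```python
# Hypothetical sketch (not the authors' implementation) of an untrained GCN
# with random weights on a random geometric graph sampled from a manifold.
import numpy as np

rng = np.random.default_rng(0)

# Sample n latent node positions on the unit sphere in R^3 (a stand-in for
# the paper's 3-dimensional manifold; the exact manifold may differ).
n = 500
x = rng.standard_normal((n, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)

# Connect node pairs independently with probability given by a Gaussian
# kernel of the latent positions (an assumed connectivity model).
bandwidth = 0.5
dist2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
probs = np.exp(-dist2 / (2 * bandwidth ** 2))
adj = (rng.uniform(size=(n, n)) < probs).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T  # symmetric adjacency, no self-loops

# Symmetrically normalized adjacency, as in standard GCN propagation.
deg = adj.sum(axis=1)
d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1.0))
a_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def untrained_gcn(a_norm, features, layer_sizes, rng):
    """Forward pass of a GCN whose weights are random (never trained)."""
    h = features
    for d_out in layer_sizes:
        w = rng.standard_normal((h.shape[1], d_out)) / np.sqrt(h.shape[1])
        h = np.maximum(a_norm @ h @ w, 0.0)  # propagate, transform, ReLU
    return h

# Use the latent positions as input features and run a 3-layer random GCN.
out = untrained_gcn(a_norm, x, layer_sizes=[16, 16, 8], rng=rng)
print(out.shape)  # (500, 8)
```

Repeating this forward pass for graphs of increasing size n (with the same random weights) is one way to probe the convergence behavior the paper illustrates, under the assumptions stated above.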