Almost Surely Asymptotically Constant Graph Neural Networks
Authors: Sam Adam-Day, Michael Benedikt, Ismail Ilkan Ceylan, Ben Finkelshtein
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs. |
| Researcher Affiliation | Academia | Sam Adam-Day, Michael Benedikt, Ismail Ilkan Ceylan, Ben Finkelshtein; Department of Computer Science, University of Oxford, Oxford, UK |
| Pseudocode | No | The paper describes algorithms and interpretations in text and mathematical formulas but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | All our experiments were run on a single NVidia GTX V100 GPU. We made our codebase available online at https://github.com/benfinkelshtein/GNN-Asymptotically-Constant. |
| Open Datasets | Yes | We used the TIGER-Alaska dataset [16] of geographic faces. The original dataset has 93366 nodes, while Dimitrov et al. [10] extracted smaller datasets with graphs having 1K, 5K, 10K, 25K and 90K nodes. |
| Dataset Splits | No | The paper discusses drawing samples of graphs of increasing sizes, but it does not specify explicit training, validation, and test splits in the conventional machine-learning sense. |
| Hardware Specification | Yes | All our experiments were run on a single NVidia GTX V100 GPU. |
| Software Dependencies | No | The paper mentions components such as a ReLU non-linearity and a softmax function, but it does not name or give version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | We consider five models with the same architecture, each having randomly initialized weights, utilizing a ReLU non-linearity, and applying a softmax function to their outputs. Each model uses a hidden dimension of 128, 3 layers and an output dimension of 5. |
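
For context, the quoted setup lends itself to a compact illustration: a randomly initialized 3-layer GNN with hidden dimension 128 and a softmax over 5 outputs, evaluated on random graphs of increasing size to observe the convergence phenomenon the paper reports. The sketch below is not taken from the authors' codebase; it assumes a PyTorch Geometric `GCNConv` backbone, an input dimension of 8, an Erdős-Rényi edge probability of 0.1, and standard-normal node features, all of which are illustrative choices.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.utils import erdos_renyi_graph


class RandomGCN(torch.nn.Module):
    """3-layer GNN with hidden width 128 and 5 output classes,
    matching the dimensions quoted in the experiment setup."""

    def __init__(self, in_dim: int = 8, hidden: int = 128, out_dim: int = 5):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, out_dim)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = self.conv3(x, edge_index)
        # Mean readout followed by a softmax over the 5 outputs.
        return F.softmax(global_mean_pool(x, batch), dim=-1)


torch.manual_seed(0)
model = RandomGCN()  # randomly initialized weights, no training

# Draw Erdos-Renyi graphs of increasing size; under the paper's
# result the softmax output should settle toward a constant vector.
for n in [100, 1_000, 5_000]:
    edge_index = erdos_renyi_graph(n, edge_prob=0.1)  # assumed edge probability
    x = torch.randn(n, 8)  # assumed feature distribution
    batch = torch.zeros(n, dtype=torch.long)
    with torch.no_grad():
        out = model(x, edge_index, batch)
    print(n, out.squeeze().tolist())
```

As the graph size `n` grows, the printed softmax vectors should become nearly identical across sampled graphs, which is the asymptotic-constancy behavior the paper validates empirically.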