On the Expressive Power of Geometric Graph Neural Networks
Authors: Chaitanya K. Joshi, Cristian Bodnar, Simon V Mathis, Taco Cohen, Pietro Lio
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Synthetic experiments supplementing our results are available at https://github.com/chaitjo/geometric-gnn-dojo. In Section 5, we provide synthetic experiments to supplement our theoretical results and highlight practical challenges in building maximally powerful geometric GNNs, such as geometric oversquashing with increased depth, and counterexamples that highlight the utility of higher-order spherical tensors. |
| Researcher Affiliation | Collaboration | 1University of Cambridge, UK 2Qualcomm AI Research, The Netherlands. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. |
| Pseudocode | No | No structured pseudocode or algorithm blocks are present. The methodology is described using mathematical equations and prose. |
| Open Source Code | Yes | Synthetic experiments supplementing our results are available at https://github.com/chaitjo/geometric-gnn-dojo |
| Open Datasets | No | No concrete access information for a publicly available or open dataset is provided. The paper states: 'We design synthetic experiments to highlight practical challenges in building expressive geometric GNNs: Distinguishing k-chains... Rotationally symmetric structures... Counterexamples from Pozdnyakov et al. (2020)...' |
| Dataset Splits | No | No specific dataset split information (like train/validation/test percentages or counts) is provided. The experiments use synthetically generated data for specific tasks without explicit splits. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or cluster specifications) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | No specific version numbers for software dependencies are provided. The paper mentions using libraries such as 'PyTorch Geometric' and 'e3nn' without specifying their versions: 'For SchNet and DimeNet, we use the implementation from PyTorch Geometric (Fey & Lenssen, 2019). Our TFN implementation is based on e3nn (Geiger & Smidt, 2022).' |
| Experiment Setup | Yes | We set scalar feature channels to 128 for SchNet, DimeNet, SphereNet, and E-GNN. We set scalar/vector/tensor feature channels to 64 for GVP-GNN, TFN, MACE. TFN and MACE use order L = 2 tensors by default. MACE uses local body order 4 by default. We train all models for 100 epochs using the Adam optimiser, with an initial learning rate of 1e-4, which we reduce by a factor of 0.9 and patience of 25 epochs when the performance plateaus. All results are averaged across 10 random seeds. |
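The learning-rate schedule quoted in the Experiment Setup row (reduce by a factor of 0.9 after 25 epochs without improvement) corresponds to plateau-based decay. Below is a minimal, hedged plain-Python sketch of that schedule's logic; it is an illustrative stand-in (the function name `schedule_lr` and its arguments are our own), not the authors' implementation, which would typically use a framework scheduler such as PyTorch's `ReduceLROnPlateau`:

```python
def schedule_lr(val_losses, init_lr=1e-4, factor=0.9, patience=25):
    """Illustrative plateau schedule: start at init_lr and multiply by
    `factor` once the validation loss fails to improve for more than
    `patience` consecutive epochs. Returns the lr in effect each epoch."""
    lr = init_lr
    best = float("inf")
    bad_epochs = 0  # epochs since the last improvement
    lrs = []
    for loss in val_losses:
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs > patience:
                lr *= factor      # plateau detected: decay the rate
                bad_epochs = 0    # restart the patience counter
        lrs.append(lr)
    return lrs
```

For example, a flat loss curve of 27 epochs triggers exactly one decay (1 improvement epoch followed by 26 stagnant ones), bringing the rate from 1e-4 to 9e-5.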