Cooperative Graph Neural Networks

Authors: Ben Finkelshtein, Xingyue Huang, Michael M. Bronstein, Ismail Ilkan Ceylan

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide a theoretical analysis of the new message-passing scheme which is further supported by an extensive empirical analysis on synthetic and real-world data. Empirically, we focus on CO-GNNs with basic action and environment networks to carefully assess the virtue of the new message-passing paradigm. We first validate the strength of our approach on a synthetic task (Section 6.1). Then, we conduct experiments on real-world datasets, and observe that CO-GNNs always improve compared to their baseline models, and yield multiple state-of-the-art results (Section 6.2 and Appendix C.3). (A hedged sketch of the action/environment scheme follows this table.)
Researcher Affiliation | Academia | Department of Computer Science, University of Oxford. Correspondence to: name surname <{name.surname}@cs.ox.ac.uk>.
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Our codebase is available at https://github.com/benfinkelshtein/CoGNN.
Open Datasets | Yes | We evaluate CO-GNNs on a synthetic experiment, and on real-world node classification datasets (Platonov et al., 2023). We also report a synthetic expressiveness experiment, an experiment on long-range interactions datasets (Dwivedi et al., 2022), and graph classification datasets (Morris et al., 2020) in Appendix C. The statistics of the real-world long-range, node-based, and graph-based benchmarks used can be found in Tables 6 to 9.
Dataset Splits | Yes | We consider a train, validation, and test split of equal size. We evaluate SUMGNN, MEANGNN and their CO-GNN counterparts, CO-GNN(Σ, Σ) and CO-GNN(µ, µ), on the 5 heterophilic graphs, following the 10 data splits and the methodology of Platonov et al. (2023). (A hedged dataset-loading and split-selection sketch follows this table.)
Hardware Specification | No | The acknowledgments mention 'the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work (http://dx.doi.org/10.5281/zenodo.22558)'. However, no specific details such as CPU/GPU models, memory, or cluster specifications are provided.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'AdamW as optimizer', but it does not specify any software libraries or frameworks (e.g., PyTorch, TensorFlow) with the version numbers that are crucial for reproducibility.
Experiment Setup | Yes | We report the Mean Absolute Error (MAE), use the Adam optimizer, and present all details including the hyperparameters in Appendix E.4. In Tables 10 to 14, we report the hyperparameters used in our experiments; these include the number of layers, dimension, learned temperature, τ0, number of epochs, dropout, learning rate, batch size, activation function, skip connections, and scheduler settings. (A hedged training-loop sketch follows this table.)
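
The Research Type row mentions CO-GNN's action and environment networks without showing how they interact, so here is a minimal sketch of one such layer: an action network scores per-node actions (standard, listen, broadcast, isolate, as described in the paper), a straight-through Gumbel-softmax samples one action per node, and an environment network then aggregates messages only over edges whose source broadcasts and whose target listens. This assumes PyTorch and PyTorch Geometric; the class and variable names are illustrative and not taken from the authors' codebase.

```python
# Illustrative sketch only (assumes PyTorch / PyTorch Geometric); not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import SimpleConv  # plain sum/mean aggregation over edges

STANDARD, LISTEN, BROADCAST, ISOLATE = 0, 1, 2, 3

class CoGNNLayerSketch(nn.Module):
    def __init__(self, dim: int, tau: float = 1.0):
        super().__init__()
        self.action_net = nn.Linear(dim, 4)     # scores the four per-node actions
        self.env_net = nn.Linear(2 * dim, dim)  # environment update from [self, aggregated]
        self.aggr = SimpleConv(aggr="sum")      # Σ-aggregation; "mean" would mimic the µ variant
        self.tau = tau

    def forward(self, x, edge_index):
        # Sample one action per node with a straight-through Gumbel-softmax.
        actions = F.gumbel_softmax(self.action_net(x), tau=self.tau, hard=True)  # [N, 4]
        listens = actions[:, STANDARD] + actions[:, LISTEN]        # node accepts incoming messages
        broadcasts = actions[:, STANDARD] + actions[:, BROADCAST]  # node sends outgoing messages

        # Keep an edge u -> v only if u broadcasts and v listens; isolated nodes drop both roles.
        src, dst = edge_index
        edge_weight = broadcasts[src] * listens[dst]

        # Environment network: message passing over the retained (weighted) edges.
        agg = self.aggr(x, edge_index, edge_weight=edge_weight)
        return F.relu(self.env_net(torch.cat([x, agg], dim=-1)))
```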
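For the Open Datasets and Dataset Splits rows, the paper does not name a loading routine; a hedged sketch, assuming the heterophilic graphs of Platonov et al. (2023) are loaded through PyTorch Geometric's HeterophilousGraphDataset (which ships the same ten predefined splits), might look as follows.

```python
# Assumes PyTorch Geometric provides these benchmarks; not taken from the authors' codebase.
from torch_geometric.datasets import HeterophilousGraphDataset

# One of the five heterophilic graphs evaluated in the paper:
# roman-empire, amazon-ratings, minesweeper, tolokers, questions.
dataset = HeterophilousGraphDataset(root="data", name="roman-empire")
data = dataset[0]

# Each mask is expected to carry one column per predefined split;
# the paper follows the 10 splits and methodology of Platonov et al. (2023).
split = 0
train_mask = data.train_mask[:, split]
val_mask = data.val_mask[:, split]
test_mask = data.test_mask[:, split]

print(data.num_nodes, int(train_mask.sum()), int(val_mask.sum()), int(test_mask.sum()))
```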
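For the Experiment Setup row, only the optimizer (Adam) and the reported metric (MAE) are stated in the quoted text; the training-loop sketch below uses those two choices, while the model call signature, learning rate, and epoch count are placeholders rather than the paper's settings (those live in its Appendix E.4).

```python
# Hedged sketch: the Adam optimizer and the MAE objective are stated in the paper; the model
# signature, learning rate, and epoch count below are illustrative placeholders.
import torch
import torch.nn.functional as F

def train(model, loader, epochs: int = 100, lr: float = 1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.train()
        for batch in loader:
            optimizer.zero_grad()
            pred = model(batch.x, batch.edge_index, batch.batch)  # placeholder call signature
            loss = F.l1_loss(pred, batch.y)                       # MAE objective / metric
            loss.backward()
            optimizer.step()
    return model
```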