Graph-Coupled Oscillator Networks

Authors: T. Konstantin Rusch, Ben Chamberlain, James Rowbottom, Siddhartha Mishra, Michael Bronstein

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide an extensive empirical evaluation of GraphCON on a wide variety of graph learning tasks such as transductive and inductive node classification and graph regression and classification, demonstrating that GraphCON achieves competitive performance.
Researcher Affiliation | Collaboration | 1 Seminar for Applied Mathematics (SAM), D-MATH, ETH Zürich, Switzerland; 2 ETH AI Center, ETH Zürich; 3 Twitter Inc., London, UK; 4 Department of Computer Science, University of Oxford, UK.
Pseudocode | No | The paper describes the methods using mathematical equations and textual explanations but does not include any explicit pseudocode blocks or algorithms.
Open Source Code | Yes | All code to reproduce our results can be found at https://github.com/tk-rusch/GraphCON.
Open Datasets | Yes | Homophilic datasets. We consider three widely used node classification tasks, based on the citation networks Cora (McCallum et al., 2000), Citeseer (Sen et al., 2008) and Pubmed (Namata et al., 2012).
Dataset Splits | Yes | We follow the evaluation protocols and training, validation, and test splits of Shchur et al. (2018); Chamberlain et al. (2021b), using only the largest connected component in each network.
Hardware Specification | Yes | All experiments were run on NVIDIA GeForce GTX 1080 Ti and RTX 2080 Ti GPUs.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., the Python version, or library versions such as PyTorch or TensorFlow).
Experiment Setup | Yes | The tuning of the hyperparameters was done using a standard random search algorithm. We fix the time-step Δt in (4) to 1 in all experiments. The damping parameter α as well as the frequency control parameter γ are set to 1 for all Cora, Citeseer and Pubmed experiments, while we set them to 0 for all experiments based on the Texas, Cornell and Wisconsin network graphs. For all other experiments we include α and γ in the hyperparameter search space. The tuned values can be found in Table 9.
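
The Experiment Setup row above refers to the time-step Δt and the damping/frequency parameters α and γ of the GraphCON update in Eq. (4) of the paper. Below is a minimal sketch of that update, not the authors' implementation (see the linked repository for the reference code); the choice of a GCNConv coupling layer, the ReLU activation, and the default hyperparameter values are illustrative assumptions.

```python
# Hedged sketch of one GraphCON step (Eq. (4) of the paper), not the authors' code.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # one possible choice of coupling function F_theta


class GraphCONStep(nn.Module):
    def __init__(self, hidden_dim, dt=1.0, alpha=1.0, gamma=1.0):
        super().__init__()
        self.F_theta = GCNConv(hidden_dim, hidden_dim)  # learnable graph coupling
        self.dt, self.alpha, self.gamma = dt, alpha, gamma

    def forward(self, X, Y, edge_index):
        # Y^n = Y^{n-1} + dt * [sigma(F_theta(X^{n-1})) - gamma * X^{n-1} - alpha * Y^{n-1}]
        Y = Y + self.dt * (torch.relu(self.F_theta(X, edge_index))
                           - self.gamma * X - self.alpha * Y)
        # X^n = X^{n-1} + dt * Y^n  (IMEX update of the oscillator positions)
        X = X + self.dt * Y
        return X, Y
```

With Δt = 1 and α = γ = 1 this corresponds to the configuration the paper reports for Cora, Citeseer and Pubmed, while α = γ = 0 corresponds to the Texas, Cornell and Wisconsin runs.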
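The Open Datasets and Dataset Splits rows quote the use of Cora, Citeseer and Pubmed restricted to their largest connected component. The following is a hedged sketch of one way to set this up with PyTorch Geometric's Planetoid loader and transforms; the split sizes passed to RandomNodeSplit are placeholders, not the exact protocol of Shchur et al. (2018) and Chamberlain et al. (2021b).

```python
# Hedged sketch, not the authors' pipeline: load a citation dataset, keep only its
# largest connected component, and attach random train/val/test masks.
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid

transform = T.Compose([
    T.LargestConnectedComponents(),                # keep only the largest connected component
    T.RandomNodeSplit(num_val=500, num_test=1000), # random split masks (placeholder sizes)
])

dataset = Planetoid(root='data/Cora', name='Cora', transform=transform)
data = dataset[0]
print(data.train_mask.sum(), data.val_mask.sum(), data.test_mask.sum())
```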