Gradient Gating for Deep Multi-Rate Learning on Graphs

Authors: T. Konstantin Rusch, Benjamin Paul Chamberlain, Michael W. Mahoney, Michael M. Bronstein, Siddhartha Mishra

ICLR 2023

Reproducibility Variable: Result — LLM Response
Research Type: Experimental — Empirical results are presented to demonstrate that the proposed framework achieves state-of-the-art performance on a variety of graph learning tasks, including large-scale heterophilic graphs.
Researcher Affiliation: Collaboration — T. Konstantin Rusch (ETH Zürich, ICSI and UC Berkeley); Benjamin P. Chamberlain (Charm Therapeutics); Michael W. Mahoney (ICSI, LBNL, and UC Berkeley); Michael M. Bronstein (University of Oxford); Siddhartha Mishra (ETH Zürich).
Pseudocode: No — The paper provides mathematical formulations and a schematic diagram (Figure 1) of the architecture, but no formal pseudocode or algorithm blocks.
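Since the paper gives only equations and a schematic, a minimal executable sketch may help convey the general shape of a gradient-gated, multi-rate update. Everything below is an assumption for illustration: the mean-aggregation "GNN", the weight matrices `W`/`W_hat`, and the exact gating formula are placeholders, not the paper's equations.

```python
import numpy as np

def mean_aggregate(X, A):
    # Mean over neighbours; an illustrative stand-in for a GNN layer.
    deg = A.sum(axis=1, keepdims=True)
    return A @ X / np.maximum(deg, 1.0)

def gated_update(X, A, W, W_hat, p=2.0):
    """One multi-rate gated step of the general form
       X_new = (1 - tau) * X + tau * tanh(GNN(X)),
    where tau is a per-node, per-channel gate driven by how much a node's
    (auxiliary) features differ from its neighbours'. All specifics here
    are assumptions, not the paper's exact formulation."""
    H = np.tanh(mean_aggregate(X, A) @ W)       # candidate update
    Y = np.tanh(mean_aggregate(X, A) @ W_hat)   # auxiliary features for gating
    grad = np.zeros_like(Y)
    for i in range(X.shape[0]):
        nbrs = np.nonzero(A[i])[0]
        # Sum of |Y_j - Y_i|^p over neighbours j, per channel.
        grad[i] = np.sum(np.abs(Y[nbrs] - Y[i]) ** p, axis=0)
    tau = 1.0 / (1.0 + np.exp(-grad))           # sigmoid gate, values in (0, 1)
    return (1.0 - tau) * X + tau * H

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # toy undirected adjacency
X = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8)) * 0.1
W_hat = rng.standard_normal((8, 8)) * 0.1
X1 = gated_update(X, A, W, W_hat)
```

Because the gate lies strictly between 0 and 1, each node/channel interpolates between keeping its state and taking the candidate update, which is the "multi-rate" idea in a nutshell.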
Open Source Code: No — No explicit statement or link indicating public release of the source code for the described methodology was found.
Open Datasets: Yes — "We propose regression experiments based on the Wikipedia article networks Chameleon and Squirrel (Rozemberczki et al., 2021)." "We test G2 on a node-level classification task with varying levels of homophily on the synthetic Cora dataset (Zhu et al., 2020)." "In Table 2, we test the proposed framework on several real-world heterophilic graphs (with a homophily level of 0.30) (Pei et al., 2020; Rozemberczki et al., 2021)." "To this end, we consider three different experiments based on large graphs from Lim et al. (2021)."
Dataset Splits: Yes — "Table 1 shows the test normalized mean-square error (mean and standard deviation based on the ten pre-defined splits in Pei et al. (2020))."
Hardware Specification: Yes — "All small and medium-scale experiments have been run on NVIDIA GeForce RTX 2080 Ti, GeForce RTX 3090, TITAN RTX and Quadro RTX 6000 GPUs. The large-scale experiments have been run on NVIDIA Tesla A100 (40 GB) GPUs."
Software Dependencies: No — The paper does not specify version numbers for any software dependencies or libraries used; it only mentions general tools common in machine learning.
Experiment Setup: Yes — "All hyperparameters were tuned using random search. Table 7 shows the ranges of each hyperparameter as well as the random distribution used to randomly sample from it."
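The tuning protocol above (random search over per-hyperparameter distributions) can be sketched in a few lines. The hyperparameter names, ranges, and distributions below are placeholders, not those in the paper's Table 7.

```python
import random

def sample_config(rng):
    # Draw one configuration; each hyperparameter gets its own distribution.
    return {
        "lr": 10 ** rng.uniform(-4, -2),        # log-uniform learning rate
        "hidden": rng.choice([64, 128, 256]),   # categorical hidden width
        "dropout": rng.uniform(0.0, 0.6),       # uniform dropout rate
    }

rng = random.Random(0)
# Random search: sample N independent configurations, then train/evaluate each.
configs = [sample_config(rng) for _ in range(20)]
```

In practice each sampled configuration would be trained and scored on a validation split, and the best-scoring one retrained or reported.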