GATE: How to Keep Out Intrusive Neighbors
Authors: Nimrah Mustafa, Rebekka Burkholz
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments): "We validate the ability of GATE to perform the appropriate amount of neighborhood aggregation, as relevant for the given task and input graph, on both synthetic and real-world graphs." |
| Researcher Affiliation | Academia | CISPA Helmholtz Center for Information Security, 66123 Saarbrücken, Germany. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our experimental code is available at https://github.com/RelationalML/GATE.git. |
| Open Datasets | Yes | On real-world datasets, GATE performs competitively on homophilic datasets and is substantially better than GAT on heterophilic datasets. Furthermore, up to our knowledge, it achieves a new state of the art on the relatively large OGB-arxiv dataset (i.e., 79.57 ± 0.84% test accuracy). ... We evaluate GATE on relatively large-scale real-world node classification tasks, namely on five heterophilic benchmark datasets (Platonov et al., 2023) (see Table 3) and three OGB datasets (Hu et al., 2021) (see Table 5). |
| Dataset Splits | Yes | Nodes are divided randomly into train/validation/test splits with a 2:1:1 ratio. ... Real-world datasets use their standard train/test/validation splits, i.e. those provided by PyTorch Geometric for the Planetoid datasets Cora and Citeseer, by the OGB framework for OGB datasets, and by (Platonov et al., 2023) for all remaining real-world datasets. (A split-loading sketch follows the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions PyTorch Geometric and the Adam optimizer but does not specify version numbers for these or for other key software dependencies. |
| Experiment Setup | Yes | For synthetic datasets, the network width is fixed to 64 in all cases. ... For all synthetic data, a learning rate of 0.005 is used. Real-world datasets use their standard train/test/validation splits... the learning rate is adjusted for different real-world datasets to enable stable training of models as specified in Table 6. (A training-setup sketch follows the table.) |
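
The reported splits can be reproduced roughly as follows: a random 2:1:1 node split for the synthetic graphs, and the standard splits shipped with PyTorch Geometric and OGB for the real-world datasets. This is a minimal sketch, not the authors' code; the helper name `random_node_split`, the fixed seed, and the choice of Cora and ogbn-arxiv as examples are illustrative assumptions.

```python
import torch
from torch_geometric.datasets import Planetoid
from ogb.nodeproppred import PygNodePropPredDataset


def random_node_split(num_nodes, ratio=(2, 1, 1), seed=0):
    """Random train/val/test masks in the 2:1:1 ratio described for synthetic graphs.

    Hypothetical helper, not taken from the paper's repository.
    """
    perm = torch.randperm(num_nodes, generator=torch.Generator().manual_seed(seed))
    total = sum(ratio)
    n_train = num_nodes * ratio[0] // total
    n_val = num_nodes * ratio[1] // total
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask


# Real-world datasets use the standard splits provided by the frameworks:
cora = Planetoid(root="data/planetoid", name="Cora")[0]   # cora.train_mask / val_mask / test_mask
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/ogb")
split_idx = arxiv.get_idx_split()                         # {"train": ..., "valid": ..., "test": ...}
```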
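
The training setup can be sketched from the reported hyperparameters: hidden width 64, the Adam optimizer, and a learning rate of 0.005 for the synthetic data (real-world learning rates are dataset-specific, per Table 6). Since GATE itself is not part of PyTorch Geometric, a plain two-layer GAT serves here as a stand-in for the architecture; the class name `TwoLayerGATStandIn`, the Cora dataset, and the epoch count are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GATConv

dataset = Planetoid(root="data/planetoid", name="Cora")  # standard PyG split
data = dataset[0]


class TwoLayerGATStandIn(torch.nn.Module):
    """Two-layer GAT used only as a placeholder for the GATE architecture."""

    def __init__(self, in_dim, num_classes, hidden=64):  # width 64 as in the synthetic setup
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden)
        self.conv2 = GATConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)


model = TwoLayerGATStandIn(dataset.num_features, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)  # lr reported for synthetic data

model.train()
for epoch in range(200):  # epoch count is an assumption, not from the paper
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```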