Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Greener GRASS: Enhancing GNNs with Encoding, Rewiring, and Attention
Authors: Tongzhou Liao, Barnabás Póczos
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluations demonstrate that GRASS achieves state-of-the-art performance on multiple benchmark datasets, including a 20.3% reduction in mean absolute error on the ZINC dataset. |
| Researcher Affiliation | Academia | Tongzhou Liao, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA; Barnabás Póczos, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA |
| Pseudocode | Yes | A.2 PSEUDOCODE OF THE PERMUTATION MODEL Algorithm 1 The Permutation Model (Friedman et al., 1989) |
| Open Source Code | Yes | The source code of GRASS is available at https://github.com/grass-gnn/grass. |
| Open Datasets | Yes | To measure the performance of GRASS, we train and evaluate it on five of the GNN Benchmark Datasets (Dwivedi et al., 2023): ZINC, MNIST, CIFAR10, CLUSTER, and PATTERN, as well as four of the Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022) datasets: Peptides-func, Peptides-struct, Pascal VOC-SP, and COCO-SP. |
| Dataset Splits | Yes | To measure the performance of GRASS, we train and evaluate it on five of the GNN Benchmark Datasets (Dwivedi et al., 2023): ZINC, MNIST, CIFAR10, CLUSTER, and PATTERN, as well as four of the Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022) datasets: Peptides-func, Peptides-struct, Pascal VOC-SP, and COCO-SP. Following the experimental setup of Rampášek et al. (2022) and other work that we compare, we configure GRASS to around 100k parameters for MNIST and CIFAR10, and 500k parameters for all other datasets. |
| Hardware Specification | Yes | CPU AMD Ryzen 9 9950X GPU NVIDIA RTX A6000 Ada |
| Software Dependencies | No | Models are trained with the Lion optimizer (Chen et al., 2023). ... The implementations of replacement attention mechanisms are provided by PyTorch Geometric (Fey and Lenssen, 2019)... No specific version numbers for software libraries are provided. |
| Experiment Setup | Yes | Table 12: Model hyperparameters for experiments on GNN Benchmark Datasets. Table 13: Model hyperparameters for experiments on LRGB datasets and the roman-empire dataset. |