G$^2$N$^2$: Weisfeiler and Lehman go grammatical
Authors: Jason Piquenot, Aldo Moscatelli, Maxime Berar, Pierre Héroux, Romain Raveaux, Jean-Yves RAMEL, Sébastien Adam
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through various experiments, we demonstrate the superior efficiency of G2N2 compared to other 3-WL GNNs across numerous downstream tasks. This section is dedicated to the experimental validation of both the framework and G2N2. |
| Researcher Affiliation | Academia | LITIS Lab, University of Rouen Normandy, France LIFAT Lab, University of Tours, France |
| Pseudocode | No | The paper describes procedures using figures and text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | Yes | We use a graph regression benchmark called QM9 which is composed of 130K small molecules (Ramakrishnan et al. (2014); Wu et al. (2018)). For graph classification, we evaluate G2N2 on the classical TUD benchmark (Morris et al. (2020)). |
| Dataset Splits | Yes | As in Maron et al. (2019a), the dataset is randomly split into training, validation, and test sets with a respective ratio of 0.8, 0.1 and 0.1. |
| Hardware Specification | No | The paper states that "The mean epoch duration is measured on the same device" but does not provide any specific details about the hardware used for experiments. |
| Software Dependencies | No | The paper does not specify the version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | In the experiments, all the linear blocks of a layer are set at the same width $S^{(l)} = b^{(l)}_1 = b^{(l)}_2 = b^{(l)}_{\mathrm{diag}}$. This means that MLP$^{(l)}_M$ takes as input a third-order tensor of dimensions $n \times n \times 4S^{(l)}$ and MLP$^{(l)}_{V_c}$ takes as input a matrix of dimensions $n \times 2S^{(l)}$. At each layer, the MLP depth is always 2 and the intermediate layer doubles the input dimension. The parameter setting for each of the 6 experiments related to these datasets can be found in Table 5 of appendix C. |
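The 0.8/0.1/0.1 random split reported above can be sketched as follows. This is a minimal illustration, not the authors' actual data pipeline; the function name `split_indices` and the fixed seed are assumptions for reproducibility of the example.

```python
import random

def split_indices(n, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly partition n sample indices into train/val/test subsets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # deterministic shuffle for the sketch
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]              # remainder goes to test
    return train, val, test

# QM9 has roughly 130K molecules; with a 0.8/0.1/0.1 split:
train, val, test = split_indices(130_000)
print(len(train), len(val), len(test))        # 104000 13000 13000
```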
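The experiment-setup row describes depth-2 MLPs whose intermediate layer doubles the input dimension. A minimal NumPy sketch of that shape convention, applied to a matrix of node features of width $2S^{(l)}$ (the weight initialisation and ReLU nonlinearity are assumptions, not details stated in the paper):

```python
import numpy as np

def depth2_mlp(x, w1, b1, w2, b2):
    # Depth-2 MLP: one hidden layer (twice the input width) with ReLU,
    # followed by a linear output layer.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

rng = np.random.default_rng(0)
n, width = 5, 8                      # n nodes, input width 2*S(l) = 8 (illustrative)
d_hidden = 2 * width                 # intermediate layer doubles the input dimension
w1 = rng.standard_normal((width, d_hidden)); b1 = np.zeros(d_hidden)
w2 = rng.standard_normal((d_hidden, width)); b2 = np.zeros(width)

x = rng.standard_normal((n, width))  # stand-in for the n x 2S(l) input matrix
y = depth2_mlp(x, w1, b1, w2, b2)
print(y.shape)                       # (5, 8)
```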