Path Neural Networks: Expressive and Accurate Graph Neural Networks
Authors: Gaspard Michel, Giannis Nikolentzos, Johannes F. Lutzeyer, Michalis Vazirgiannis
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now evaluate the performance of our Path NNs in synthetic experiments specifically designed to exhibit the expressiveness of GNNs in Section 4.1 and on a range of real-world datasets in Section 4.2. |
| Researcher Affiliation | Collaboration | (1) LIX, École Polytechnique, IP Paris, France; (2) Deezer Research, Paris, France. |
| Pseudocode | No | The paper describes the model architecture and steps using narrative text and mathematical equations, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/gasmichel/PathNNs_expressive |
| Open Datasets | Yes | We use 3 publicly available datasets: (1) the Circular Skip Link (CSL) dataset (Murphy et al., 2019); (2) the EXP dataset (Abboud et al., 2021); (3) the SR dataset. We evaluate the proposed model on 6 datasets contained in the TUDataset collection (Morris et al., 2020a): DD, NCI1, PROTEINS, ENZYMES, IMDB-B and IMDB-M. We also evaluate the proposed model on ogbg-molhiv, a molecular property prediction dataset from the Open Graph Benchmark (OGB) (Hu et al., 2020). We conduct an experiment on the ZINC 12K dataset (Dwivedi et al., 2020). Finally, we experiment with Peptides-struct and Peptides-func (Dwivedi et al., 2022). (A loading sketch follows the table.) |
| Dataset Splits | Yes | To evaluate the model's performance, we used 5-fold cross-validation on CSL and 4-fold cross-validation on EXP-Class. Following Errica et al. (2020), we evaluate TUDatasets using 10-fold cross-validation with their provided data splits. (A generic fold sketch follows the table.) |
| Hardware Specification | Yes | These experiments were run over an NVIDIA Tesla T4 GPU with 16GB of memory. |
| Software Dependencies | No | The paper describes the use of an Adam optimizer, MLPs, and LSTMs, but does not specify versions for any programming languages or libraries (e.g., PyTorch, TensorFlow) used for implementation. |
| Experiment Setup | Yes | For all experiments, the aggregation function ϕ of Equation (3) is set to the identity function and the normalization layer of Equation (2) is removed. We set initial node features to be vectors of ones and process them using a 2-layer MLP. A 1-layer MLP is applied to the final graph representation to generate predictions... We train for 200 epochs using the Adam optimizer with learning rate 10⁻³. The hidden dimension size is set to 64. Batch sizes are set to 32, except for PathNN-AP where we set it to 8 for CSL and 16 for CEXP. Details about the hyperparameter configuration can be found in Appendix D. (A configuration sketch follows the table.) |
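
The datasets quoted in the Open Datasets row are all obtainable from standard graph-learning libraries. Below is a minimal, hypothetical loading sketch using PyTorch Geometric and the OGB package; the root paths are placeholders, and the paper's repository may fetch the data differently. Note that IMDB-B and IMDB-M are registered in the TUDataset collection under the names IMDB-BINARY and IMDB-MULTI.

```python
from torch_geometric.datasets import TUDataset, ZINC
from ogb.graphproppred import PygGraphPropPredDataset

# TUDataset collection used in the paper (IMDB-B/IMDB-M under canonical names)
tu_names = ["DD", "NCI1", "PROTEINS", "ENZYMES", "IMDB-BINARY", "IMDB-MULTI"]
tu_datasets = {name: TUDataset(root="data/TU", name=name) for name in tu_names}

# ogbg-molhiv from the Open Graph Benchmark (Hu et al., 2020)
molhiv = PygGraphPropPredDataset(name="ogbg-molhiv", root="data/ogb")

# ZINC 12K (Dwivedi et al., 2020): subset=True selects the 12K version
zinc = {split: ZINC(root="data/ZINC", subset=True, split=split)
        for split in ("train", "val", "test")}
```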
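For the cross-validation protocol, the paper reuses the fixed splits of Errica et al. (2020) rather than drawing its own folds. As a generic stand-in, the sketch below shows the fold structure with scikit-learn; the stratified 10-fold setup and the seed are assumptions for illustration only, not the authors' exact splits.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def kfold_indices(labels, n_splits=10, seed=0):
    """Yield (train_idx, test_idx) index pairs for stratified k-fold CV."""
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    # Features are irrelevant for index generation, so a dummy matrix suffices.
    yield from skf.split(np.zeros((len(labels), 1)), labels)

# Example: folds = list(kfold_indices([int(g.y) for g in tu_datasets["NCI1"]]))
```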
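Finally, a sketch of the quoted training configuration: all-ones initial node features passed through a 2-layer MLP, a 1-layer MLP readout, Adam with learning rate 10⁻³, hidden dimension 64, 200 epochs, and batch size 32. The path-aggregation layers that define PathNN are omitted (see the authors' repository); the skeleton below only mirrors the surrounding setup, and the choice of NCI1 and a sum readout are assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import global_add_pool

HIDDEN_DIM, EPOCHS, BATCH_SIZE, LR = 64, 200, 32, 1e-3  # quoted hyperparameters

class Skeleton(nn.Module):
    """Scaffold around the quoted setup; NOT the authors' PathNN. The path
    aggregation layers between the two MLPs are omitted."""
    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        # "initial node features ... vectors of ones ... 2-layer MLP"
        self.input_mlp = nn.Sequential(
            nn.Linear(1, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim))
        # "A 1-layer MLP is applied to the final graph representation"
        self.readout = nn.Linear(hidden_dim, num_classes)

    def forward(self, batch):
        x = torch.ones(batch.num_nodes, 1)   # all-ones initial features
        h = self.input_mlp(x)
        # ... PathNN path-aggregation layers would go here ...
        return self.readout(global_add_pool(h, batch.batch))

train_dataset = TUDataset(root="data/TU", name="NCI1")  # assumed example dataset
model = Skeleton(HIDDEN_DIM, train_dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)

for epoch in range(EPOCHS):
    for batch in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch), batch.y)
        loss.backward()
        optimizer.step()
```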