The expressive power of pooling in Graph Neural Networks
Authors: Filippo Maria Bianchi, Veronica Lachi
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we introduce an experimental setup to verify empirically the expressive power of a GNN equipped with pooling layers, in terms of its capability to perform a graph isomorphism test. [...] To empirically confirm the theoretical results presented in Section 3, we designed a synthetic dataset that is specifically tailored to evaluate the expressive power of a GNN. |
| Researcher Affiliation | Academia | Filippo Maria Bianchi, Dept. of Mathematics and Statistics, UiT The Arctic University of Norway; NORCE, Norwegian Research Centre AS, filippo.m.bianchi@uit.no. Veronica Lachi, Dept. of Information Engineering and Mathematics, University of Siena, veronica.lachi@student.unisi.it |
| Pseudocode | No | The paper describes algorithms and mathematical formulations but does not contain a discrete block labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | The EXPWL1 dataset and the code to reproduce the experimental results are publicly available at https://github.com/FilippoMB/The-expressive-power-of-pooling-in-GNNs |
| Open Datasets | Yes | Therefore, we introduce a modified version of EXP called EXPWL1, which comprises a collection of graphs {G_1, ..., G_N, H_1, ..., H_N} representing propositional formulas that can be satisfiable or unsatisfiable. Each pair (G_i, H_i) in EXPWL1 consists of two non-isomorphic graphs distinguishable by a WL test, which encode formulas with opposite SAT outcomes. [...] The EXPWL1 dataset and the code to reproduce the experimental results are publicly available. Table 4 reports the information about the datasets used in the experimental evaluation. |
| Dataset Splits | Yes | To ensure a fair comparison, when testing each method we shuffled the datasets and created 10 different train/validation/test splits using the same random seed. We trained each model on all splits for 500 epochs and reported the average training time and the average test accuracy obtained by the models that achieved the lowest loss on the validation set. |
| Hardware Specification | Yes | We gratefully acknowledge the support of Nvidia Corporation with the donation of the RTX A6000 GPUs used in this work. |
| Software Dependencies | No | For each pooling method, we used the implementation in PyTorch Geometric [17] with the default configuration. This only mentions 'PyTorch Geometric' without a version number, and no other software components are mentioned with specific versions. |
| Experiment Setup | Yes | The GNN architecture used in all experiments consists of: [2 GIN layers] [1 pooling layer with pooling ratio 0.1] [1 GIN layer] [global_sum_pool] [dense readout]. Each GIN layer is configured with an MLP with 2 hidden layers of 64 units and ELU activation functions. The readout is a 3-layer MLP with units [64, 64, 32], ELU activations, and dropout 0.5. The GNN is trained with the Adam optimizer with an initial learning rate of 1e-4 using batches of size 32. For SAGPool or ASAPool we used only 1 GIN layer before pooling. For PANPool we used 2 PANConv layers with filter size 2 instead of the first 2 GIN layers. The auxiliary losses in DiffPool, MinCutPool, and DMoN are added to the cross-entropy loss with weights [0.1, 0.1], [0.5, 1.0], and [0.3, 0.3, 0.3], respectively. For k-MIS we used k = 5 and aggregated the features with the sum. For Graclus, we aggregated the node features with the sum. (A minimal code sketch of this architecture is given below the table.) |
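
To make the quoted setup concrete, below is a minimal PyTorch Geometric sketch of the described architecture: 2 GIN layers, one pooling layer with ratio 0.1, one more GIN layer, global sum pooling, and a dense readout. This is not the authors' implementation (that is available in the linked repository): `TopKPooling` stands in for whichever pooling operator is being benchmarked, and names such as `PoolGNN`, `gin_mlp`, `in_dim`, the GIN MLP output width, and the dropout placement in the readout are illustrative assumptions.

```python
import torch
from torch import nn
from torch_geometric.nn import GINConv, TopKPooling, global_add_pool


def gin_mlp(in_dim, hidden=64):
    # MLP with 2 hidden layers of 64 units and ELU activations, as reported.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ELU(),
        nn.Linear(hidden, hidden), nn.ELU(),
    )


class PoolGNN(nn.Module):
    """[2 GIN layers] [pooling, ratio 0.1] [1 GIN layer] [global sum pool] [readout]."""

    def __init__(self, in_dim, num_classes, hidden=64, ratio=0.1):
        super().__init__()
        self.conv1 = GINConv(gin_mlp(in_dim, hidden))
        self.conv2 = GINConv(gin_mlp(hidden, hidden))
        # Stand-in pooling operator; the paper benchmarks many alternatives.
        self.pool = TopKPooling(hidden, ratio=ratio)
        self.conv3 = GINConv(gin_mlp(hidden, hidden))
        # 3-layer readout MLP with units [64, 64, 32], ELU and dropout 0.5
        # (exact dropout placement is an assumption), plus a classification layer.
        self.readout = nn.Sequential(
            nn.Linear(hidden, 64), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(64, 64), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(64, 32), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(32, num_classes),
        )

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index)
        x = self.conv2(x, edge_index)
        x, edge_index, _, batch, _, _ = self.pool(x, edge_index, batch=batch)
        x = self.conv3(x, edge_index)
        x = global_add_pool(x, batch)  # PyG's name for global sum pooling
        return self.readout(x)


# Training configuration quoted above: Adam optimizer, initial learning rate 1e-4,
# batch size 32 (batching itself would be handled by a PyG DataLoader).
model = PoolGNN(in_dim=1, num_classes=2)  # feature/class sizes are placeholders
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

For the variants mentioned in the table (SAGPool, ASAPool, PANPool, DiffPool, MinCutPool, DMoN, k-MIS, Graclus), only the pooling block and, where noted, the number of preceding convolution layers or the auxiliary loss terms would change; the surrounding GIN backbone, global sum pooling, and readout stay as sketched.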