Ordered Subgraph Aggregation Networks
Authors: Chendi Qian, Gaurav Rattan, Floris Geerts, Mathias Niepert, Christopher Morris
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we study the predictive performance of different subgraph-enhanced GNNs, showing that our data-driven architectures increase prediction accuracy on standard benchmark datasets compared to non-data-driven subgraph-enhanced graph neural networks while reducing computation time. 5 Experimental evaluation Here, we aim to empirically investigate the learning performance and efficiency of data-driven subgraph-enhanced GNNs, instances of the k-OSAN framework, compared to non-data-driven ones. |
| Researcher Affiliation | Academia | Chendi Qian, Department of Computer Science, TU Munich; Gaurav Rattan, Department of Computer Science, RWTH Aachen University; Floris Geerts, Department of Computer Science, University of Antwerp; Christopher Morris, Department of Computer Science, RWTH Aachen University; Mathias Niepert, Department of Computer Science, University of Stuttgart |
| Pseudocode | No | The paper describes its algorithms and models in prose and mathematical notation but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | Yes | All experimental results are fully reproducible from the source code provided at https://github.com/Spazierganger/OSAN. |
| Open Datasets | Yes | Datasets To compare our data-driven, subgraph-enhanced GNNs to non-data-driven ones and standard GNN baselines, we used the ALCHEMY [Chen et al., 2019a], the QM9 [Ramakrishnan et al., 2014, Wu et al., 2018], OGBG-MOLESOL [Hu et al., 2020], and the ZINC [Dwivedi et al., 2020, Jin et al., 2017] graph-level regression datasets; see Table 16 in Appendix C for dataset statistics and properties. In addition, we used the EXP dataset [Abboud et al., 2020] to investigate the additional expressive power of subgraph-enhanced GNNs over standard ones. All datasets, excluding EXP and OGBG-MOLESOL, are available from Morris et al. [2020a] at https://chrsmrrs.github.io/datasets/ |
| Dataset Splits | Yes | Datasets To compare our data-driven, subgraph-enhanced GNNs to non-data-driven ones and standard GNN baselines, we used the ALCHEMY [Chen et al., 2019a], the QM9 [Ramakrishnan et al., 2014, Wu et al., 2018], OGBG-MOLESOL [Hu et al., 2020], and the ZINC [Dwivedi et al., 2020, Jin et al., 2017] graph-level regression datasets; see Table 16 in Appendix C for dataset statistics and properties. The use of 'standard benchmark datasets' and specific dataset citations implies the use of their predefined, well-established validation splits for evaluation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'GIN layers' and a 'GCN model' (which are neural network architectures), as well as 'batch norm and ReLU activation', but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | For all datasets and architectures, we used the competitive GIN layers [Xu et al., 2019] for the baselines and the downstream models. For data with (continuous) edge features, we used a 2-layer MLP to map them to the same number of components as the vertex features and combined them using summation. ... For all datasets and experiments, we used a GCN model [Kipf and Welling, 2017] consisting of three GCN layers, with batch norm and ReLU activation after each layer. We set the hidden dimension to that of the downstream model. ... We tune the weight for the auxiliary loss on the log scale, e.g., 0.1, 1, 10, and so on. |
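The upstream model quoted above stacks three GCN layers [Kipf and Welling, 2017] with a ReLU activation after each. Below is a minimal NumPy sketch of that propagation rule for illustration only; it omits the paper's batch norm, and the graph, feature dimensions, and random weights are made up for the example.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step (Kipf & Welling, 2017):
    H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    H_new = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_new, 0.0)             # ReLU activation

# Toy 4-node path graph; three stacked layers as in the upstream model.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 8))               # node features (hidden dim 8)
for _ in range(3):
    H = gcn_layer(A, H, rng.standard_normal((8, 8)) * 0.1)
print(H.shape)  # (4, 8)
```

In a real implementation these layers would carry learned weights (e.g., PyTorch Geometric's `GCNConv`) rather than fixed random matrices; the sketch only shows the normalized-adjacency message passing the quoted setup relies on.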