Counting Graph Substructures with Graph Neural Networks

Authors: Charilaos Kanatsoulis, Alejandro Ribeiro

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our analysis is constructive and enables the design of a generic GNN architecture that shows remarkable performance in four distinct tasks: cycle detection, cycle counting, graph classification, and molecular property prediction. ... Extensive numerical tests validate the effectiveness of the proposed approach across four distinct tasks: cycle detection, cycle counting, graph classification, and molecular property prediction.
Researcher Affiliation | Academia | Charilaos I. Kanatsoulis, Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, kanac@seas.upenn.edu; Alejandro Ribeiro, Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, aribeiro@seas.upenn.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | To implement the proposed approach, which we denote as Moment-GNN, we use the equivalent model, shown in Fig. 1b and Equation 14. [Footnote: https://github.com/MomentGNN]
Open Datasets | Yes | We use the dataset and procedure described in Vignac et al. (2020). ... To this end, we consider the ZINC dataset, which consists of 12,000 molecular graphs of different chemical compounds (Irwin et al., 2012; Dwivedi et al., 2023). ... We also test the performance of Moment-GNN in classifying the graphs of the REDDIT-B (2,000 graphs, 2 classes, 429.6 avg. # nodes) and REDDIT-M (5,000 graphs, 5 classes, 508.5 avg. # nodes) datasets. [See the ZINC loader sketch after the table.]
Dataset Splits | Yes | We use 10,000 of them for training, 1,000 for validation and 1,000 for testing. [logP prediction] ... We use 70% of the graphs for training, 20% for testing and 10% for validation. [ZINC nonagon/decagon detection; see the split sketch after the table.]
Hardware Specification | Yes | The experiments are conducted on a Linux server with an NVIDIA RTX 3080 GPU.
Software Dependencies | No | The paper does not provide version numbers for its software dependencies. It mentions using Adam as the optimizer but gives no library or framework versions.
Experiment Setup | Yes | The Moment-GNN layer is followed by a 6-layer GIN, which we train with a stochastic gradient descent optimizer, initial learning rate equal to 10^-3, batch size equal to 16 and a dropout ratio equal to 0.5. For the synthetic data experiment, we use 300 epochs to train, whereas for the ZINC data experiment, we use 1000 epochs. The hidden dimension for the GNN layers is 32 and for the output (classification) layer 128. ... The Moment-GNN layer is followed by 2 MPNN layers, which are trained with Adam for 1000 epochs, initial learning rate equal to 10^-2, and batch size equal to 16. Each layer is followed by a 3-layer MLP with ReLU activation function and a batch normalization layer. The hidden dimension for the GNN layers is 128. [See the model sketch after the table.]
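
For concreteness, here is a minimal loader sketch for the ZINC subset cited in the Open Datasets row. The paper does not name its data pipeline; this assumes PyTorch Geometric, whose built-in ZINC subset contains the same 12,000 graphs with the 10,000/1,000/1,000 split reported above.

```python
# Hypothetical loader: the paper does not state which library it used.
# PyTorch Geometric ships the 12,000-graph ZINC subset with the exact
# 10,000 / 1,000 / 1,000 train/val/test split quoted in the table.
from torch_geometric.datasets import ZINC

train_set = ZINC(root="data/ZINC", subset=True, split="train")  # 10,000 graphs
val_set = ZINC(root="data/ZINC", subset=True, split="val")      # 1,000 graphs
test_set = ZINC(root="data/ZINC", subset=True, split="test")    # 1,000 graphs
print(len(train_set), len(val_set), len(test_set))              # 10000 1000 1000
```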
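The 70/20/10 split for the nonagon/decagon detection task is stated but not specified further; a sketch of one plausible implementation follows, where the seed and shuffling procedure are assumptions, not details from the paper.

```python
# Illustrative 70/20/10 split; the paper's seed and shuffling are unspecified.
import torch

def split_70_20_10(dataset, seed=0):
    g = torch.Generator().manual_seed(seed)            # fixed seed (assumption)
    perm = torch.randperm(len(dataset), generator=g).tolist()
    n_train = int(0.7 * len(dataset))                  # 70% training
    n_test = int(0.2 * len(dataset))                   # 20% testing
    train = [dataset[i] for i in perm[:n_train]]
    test = [dataset[i] for i in perm[n_train:n_train + n_test]]
    val = [dataset[i] for i in perm[n_train + n_test:]]  # remaining ~10%
    return train, val, test
```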
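Finally, a hedged sketch of the first setup quoted in the Experiment Setup row: a 6-layer GIN with the stated hyperparameters (SGD at learning rate 10^-3, batch size 16, dropout 0.5, hidden width 32, 128-wide output layer). The Moment-GNN input layer is elided, and `in_dim` and `num_classes` are placeholders; only the hyperparameters come from the quoted text.

```python
# Sketch only: the Moment-GNN feature layer is omitted. Hyperparameters
# (6 GIN layers, hidden 32, output 128, dropout 0.5, SGD @ 1e-3, batch 16)
# follow the Experiment Setup row above; in_dim is an assumption.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GINConv, global_add_pool

class GINStack(torch.nn.Module):
    def __init__(self, in_dim, hidden=32, out_hidden=128, num_classes=2, layers=6):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        for i in range(layers):
            mlp = torch.nn.Sequential(
                torch.nn.Linear(in_dim if i == 0 else hidden, hidden),
                torch.nn.ReLU(),
                torch.nn.Linear(hidden, hidden),
            )
            self.convs.append(GINConv(mlp))
        self.head = torch.nn.Sequential(
            torch.nn.Linear(hidden, out_hidden),  # 128-dim classification layer
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),              # dropout ratio 0.5
            torch.nn.Linear(out_hidden, num_classes),
        )

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        return self.head(global_add_pool(x, batch))

model = GINStack(in_dim=16)  # input dim depends on the moment features (assumed)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# Train for 300 epochs (synthetic) or 1000 epochs (ZINC) with batch size 16.
```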