Provably Powerful Graph Networks

Authors: Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, Yaron Lipman

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimenting with our model on several real-world datasets that include classification and regression tasks on social networks, molecules, and chemical compounds, we found it to be on par or better than state of the art." See Table 1 (graph classification results on the datasets from Yanardag and Vishwanathan, 2015) and Table 2 (regression on the QM9 dataset).
Researcher Affiliation | Academia | Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, Yaron Lipman (Weizmann Institute of Science, Rehovot, Israel)
Pseudocode | No | The paper describes its algorithms and models in text and mathematical notation, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology.
Open Datasets | Yes | "For classification, we tested our method on eight real-world graph datasets from (Yanardag and Vishwanathan, 2015)... For the regression task, we conducted an experiment on a standard graph learning benchmark called the QM9 dataset (Ramakrishnan et al., 2014; Wu et al., 2018)."
Dataset Splits | Yes | "We follow the standard 10-fold cross validation protocol and splits from Zhang et al. (2018) and report our results according to the protocol described in Xu et al. (2019), namely the best averaged accuracy across the 10 folds." Parameter search was conducted on a fixed random 90%-10% split: learning rate in {5·10^-5, 10^-4, 5·10^-4, 10^-3}; learning rate decayed by a factor in [0.5, 1] every 20 epochs. For QM9, "the data is randomly split into 80% train, 10% validation and 10% test." (A sketch of these splits appears after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models or CPU types).
Software Dependencies | No | "We implemented the GNN model as described in Section 6 (see Equation 6) using the TensorFlow framework (Abadi et al., 2016)." The paper names TensorFlow but does not give its version number or any other software dependencies with specific versions.
Experiment Setup | Yes | Parameter search was conducted on a fixed random 90%-10% split: learning rate in {5·10^-5, 10^-4, 5·10^-4, 10^-3}; learning rate decayed by a factor in [0.5, 1] every 20 epochs. Three architectures were tested: (1) b = 400, d = 2, and suffix (ii); (2) b = 400, d = 2, and suffix (i); and (3) b = 256, d = 3, and suffix (ii). "We used three identical blocks B1, B2, B3", where each block B_i : R^{n^2 × a} → R^{n^2 × b} takes m3(x) = x to be the identity, and m1, m2 : R^a → R^b are chosen as d-layer MLPs with hidden layers of b features. (A sketch of one such block appears after the table.)
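
As a minimal sketch of the data handling quoted under Dataset Splits, the snippet below reproduces the random 80%/10%/10% split reported for QM9 and 10-fold cross-validation indexing for the classification benchmarks. The function names, the seed, and the use of NumPy and scikit-learn are illustrative assumptions; the paper itself reuses the precomputed folds of Zhang et al. (2018) rather than regenerating them.

```python
import numpy as np
from sklearn.model_selection import KFold

def qm9_random_split(n_samples, seed=0):
    """Random 80% train / 10% validation / 10% test split, as quoted
    for the QM9 regression experiment. The seed is an assumption."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def classification_folds(n_samples, seed=0):
    """Standard 10-fold cross-validation indices (sketch only; the
    paper follows the existing splits from Zhang et al., 2018)."""
    kf = KFold(n_splits=10, shuffle=True, random_state=seed)
    return list(kf.split(np.arange(n_samples)))
```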
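The Experiment Setup row describes blocks built from pointwise MLPs m1, m2 with m3 kept as the identity, but the quoted text does not spell out how m1 and m2 are combined; the NumPy sketch below follows the channel-wise matrix-multiplication block of the paper's Equation 6 (Maron et al., 2019). The helper names, the ReLU hidden activation, and the weight layout are assumptions for illustration, not details from the quoted setup.

```python
import numpy as np

def pointwise_mlp(x, weights):
    """Apply a d-layer MLP independently at every (i, j) entry, i.e.
    along the feature axis of an (n, n, a) tensor. `weights` is a list
    of (W, b) pairs; ReLU hidden activations are an assumption."""
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def block(X, m1_weights, m2_weights):
    """One block B_i with m3(x) = x: map X in R^{n x n x a} through two
    pointwise MLPs m1, m2 : R^a -> R^b, matrix-multiply the results
    channel by channel, and concatenate with the identity branch."""
    Y1 = pointwise_mlp(X, m1_weights)        # shape (n, n, b)
    Y2 = pointwise_mlp(X, m2_weights)        # shape (n, n, b)
    # Per-channel matmul: Y[:, :, k] = Y1[:, :, k] @ Y2[:, :, k]
    Y = np.einsum('ijk,jlk->ilk', Y1, Y2)    # shape (n, n, b)
    return np.concatenate([X, Y], axis=-1)   # identity branch m3(X) = X
```

With m3 as the identity, each block's output has a + b feature channels; stacking three such blocks B1, B2, B3 gives the depth used in the reported architectures.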