Random Walk Graph Neural Networks

Authors: Giannis Nikolentzos, Michalis Vazirgiannis

NeurIPS 2020

Reproducibility variable, result, and LLM response:
Research Type: Experimental. "We demonstrate the model's transparency on synthetic datasets. Furthermore, we empirically evaluate the model on graph classification datasets and show that it achieves competitive performance." Section 5 (Experimental Evaluation) adds: "In this Section, we empirically evaluate the proposed architecture on synthetic and real-world datasets, and we compare it to several baseline methods."
Researcher Affiliation: Academia. Giannis Nikolentzos (École Polytechnique and AUEB, nikolentzos@aueb.gr); Michalis Vazirgiannis (École Polytechnique and AUEB, mvazirg@lix.polytechnique.fr).
Pseudocode: No. The paper describes the model and its computations mathematically and in prose, but it does not include any pseudocode or clearly labeled algorithm blocks.
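Since no algorithm block is given, the model's central computation is easy to miss: each feature compares the input graph with a learned "hidden graph" by counting common walks on their direct (Kronecker) product graph. The following is a minimal, feature-agnostic sketch of that P-step random-walk count, based on our reading of the paper's formulas rather than on any released code:

```python
import torch

def common_walk_counts(A, W, P):
    """Counts of common walks of length 1..P between a graph with
    adjacency A (n x n) and a hidden graph with adjacency W (k x k),
    computed on their Kronecker (direct) product graph."""
    Ax = torch.kron(W, A)            # adjacency of the product graph
    s = torch.ones(Ax.shape[0], 1)
    counts = []
    for _ in range(P):
        s = Ax @ s                   # extend all walks by one step
        counts.append(s.sum())       # 1^T Ax^p 1 = common walks of length p
    return torch.stack(counts)
```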
Open Source Code: No. The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets: Yes. "We evaluated the proposed model on 10 publicly available graph classification datasets including 5 bio/chemo-informatics datasets: MUTAG, D&D, NCI1, PROTEINS, ENZYMES, and 5 social interaction datasets: IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-MULTI5K, COLLAB [19]."
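All ten benchmarks belong to the public TU graph-classification collection, so they can be fetched programmatically. A minimal sketch using PyTorch Geometric's TUDataset loader (the paper does not say how the authors obtained the data; the names below follow the TU repository's conventions):

```python
from torch_geometric.datasets import TUDataset

# TU-repository names for the 10 datasets cited in the paper.
NAMES = ["MUTAG", "DD", "NCI1", "PROTEINS", "ENZYMES",
         "IMDB-BINARY", "IMDB-MULTI", "REDDIT-BINARY",
         "REDDIT-MULTI-5K", "COLLAB"]

for name in NAMES:
    dataset = TUDataset(root="data/TU", name=name)  # downloads on first use
    print(f"{name}: {len(dataset)} graphs, {dataset.num_classes} classes")
```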
Dataset Splits: Yes. "We randomly split each dataset into a 90%/10% training/validation set. We perform 10-fold cross-validation to obtain an estimate of the generalization performance of each method, while within each fold a model is selected based on a 90%/10% split of the training set."
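The quoted protocol, 10-fold cross-validation with a further 90%/10% split of each fold's training portion for model selection, can be sketched as follows. Here train_model and evaluate are hypothetical placeholders for fitting and scoring, and stratified folds are an assumption (the paper only says the splits are random):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def ten_fold_protocol(graphs, labels, seed=0):
    labels = np.asarray(labels)
    scores = []
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in outer.split(np.zeros(len(labels)), labels):
        # 90%/10% split of the training set, used for model selection.
        fit_idx, val_idx = train_test_split(
            train_idx, test_size=0.1, stratify=labels[train_idx],
            random_state=seed)
        model = train_model(graphs, labels, fit_idx, val_idx)    # hypothetical
        scores.append(evaluate(model, graphs, labels, test_idx)) # hypothetical
    return float(np.mean(scores)), float(np.std(scores))
```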
Hardware Specification: No. The only hardware mention is "We also would like to thank the NVidia corporation for the donation of a GPU as part of their GPU grant program," which is too general to identify a GPU model or any other hardware details.
Software Dependencies: No. "We use the implementations of the kernels contained in the GraKeL library [38]." "We employed the LIBSVM implementation of the C-Support Vector Machine (SVM) classifier [7]." The paper names the software tools it uses (GraKeL, LIBSVM) but does not provide version numbers for them.
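For context, the kernel baselines combine a GraKeL kernel with an SVM, and scikit-learn's SVC wraps LIBSVM, so the pipeline can be reproduced along these lines. The kernel choice and C value here are illustrative, not taken from the paper:

```python
from grakel.datasets import fetch_dataset
from grakel.kernels import WeisfeilerLehman, VertexHistogram
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# MUTAG in GraKeL's native format (downloaded from the TU repository).
mutag = fetch_dataset("MUTAG", verbose=False)
G_train, G_test, y_train, y_test = train_test_split(
    mutag.data, mutag.target, test_size=0.1, random_state=0)

# Weisfeiler-Lehman subtree kernel with illustrative hyper-parameters.
wl = WeisfeilerLehman(n_iter=5, base_graph_kernel=VertexHistogram,
                      normalize=True)
K_train = wl.fit_transform(G_train)
K_test = wl.transform(G_test)

# scikit-learn's SVC is backed by LIBSVM; C=1.0 is illustrative.
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(K_train, y_train)
print("test accuracy:", clf.score(K_test, y_test))
```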
Experiment Setup: Yes. Two training configurations are quoted. For the synthetic experiments: "We set the number of epochs to 50 and the batch size to 32. We use the Adam optimizer with learning rate 0.001." For the real-world benchmarks: "For all instances and all datasets, we set the batch size to 64 and the number of epochs to 500. We use the Adam optimizer with initial learning rate 0.01 and decay the learning rate by 0.5 every 50 epochs. We use a 1-layer perceptron to transform the vertex attributes. Batch normalization [15] is applied on the generated graph representations (i.e., matrix H)." The hyper-parameters tuned for each dataset are (see the sketch after this list):
(1) the number of hidden graphs {8, 16};
(2) the number of vertices of the hidden graphs {5, 10};
(3) the dimensionality of the vertex features, {16, 32, 64} for the bio/chemo-informatics datasets and {4, 8} for the social interaction datasets;
(4) whether to normalize the obtained graph representations;
(5) the dropout ratio {0, 0.2}.
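A runnable sketch of the quoted real-world configuration, with a toy model and synthetic mini-batches standing in for the actual architecture and data loaders (both hypothetical):

```python
import itertools
import torch
from torch import nn

# Hyper-parameter grid quoted above (bio/chemo-informatics feature sizes).
grid = list(itertools.product(
    [8, 16],        # number of hidden graphs
    [5, 10],        # vertices per hidden graph
    [16, 32, 64],   # dimensionality of vertex features
    [True, False],  # normalize graph representations
    [0.0, 0.2],     # dropout ratio
))

# Hypothetical stand-ins for the model and the data pipeline.
model = nn.Linear(32, 2)
loader = [(torch.randn(64, 32), torch.randint(0, 2, (64,)))  # batch size 64
          for _ in range(10)]

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(500):                 # 500 epochs, as quoted
    for x, y in loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                     # halve the learning rate every 50 epochs
```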