A graph similarity for deep learning

Authors: Seongmin Ok

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the applicability of our proposal in several experiments. First, we compare different aggregation methods from the GNN literature for graph classification, where transform-sum-cat outperforms the rest. Then, we build a simple GNN based on the same idea, which obtains (1) a higher accuracy than other popular GNNs on node classification, (2) a lower absolute error in graph regression and when used as a discriminator, and (3) enhanced stability of the adversarial training of graph generation. (A hedged sketch of transform-sum-cat aggregation follows the table.)
Researcher Affiliation | Industry | Seongmin Ok, Samsung Advanced Institute of Technology, Suwon, South Korea; seongmin.ok@gmail.com
Pseudocode | Yes | Algorithm 1: Updating node attributes in Weisfeiler Leman similarity
Open Source Code | Yes | The code is available at https://github.com/se-ok/WLsimilarity.
Open Datasets | Yes | To test our idea, we implemented a WLS-based graph kernel. We report its classification accuracy on the TU datasets (Kersting et al., 2016)...
Dataset Splits | Yes | In all experiments, except for graph generation, we use the experimental protocols from the benchmarking framework (Dwivedi et al., 2020). For a fair comparison, the benchmark includes the datasets with fixed splits as well as reference implementations of popular GNN models... hyperparameters (including the number of iterations) are chosen based on the mean validation accuracy across the splits.
Hardware Specification | No | The paper does not specify the hardware used for its experiments (GPU/CPU models, processor speeds, memory, or other machine details).
Software Dependencies | No | While the paper mentions using PyTorch and RDKit, it does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | We set k = 4 to compare the performance with other GNN models from Dwivedi et al. (2020). Each transformation φ_i is a three-layer multi-layer perceptron (MLP), where each layer consists of a linear transformation, 1D batch normalization, ReLU, and dropout in sequence. All hyperparameters are listed in Appendix E. (A sketch of this MLP block follows the table.)
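
The "transform-sum-cat" aggregation named in the Research Type row is only summarized above, so the sketch below is a minimal PyTorch interpretation rather than the paper's implementation: it assumes the step means transforming each neighbor's features with a learned map, summing the transformed features, and concatenating the sum with the node's own features. The module name `TransformSumCat`, the single shared linear transform, and all dimensions are our assumptions.

```python
import torch
import torch.nn as nn

class TransformSumCat(nn.Module):
    """Hypothetical transform-sum-cat step (our reading of the paper):
    h'_v = [ h_v , sum_{u in N(v)} phi(h_u) ]."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        # phi: learned transformation applied to every neighbor feature
        self.phi = nn.Linear(in_dim, hidden_dim)

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, in_dim) node features
        # edge_index: (2, num_edges), rows are (source, target) node indices
        src, dst = edge_index
        messages = self.phi(h[src])                     # transform each neighbor message
        aggregated = torch.zeros(h.size(0), messages.size(1),
                                 device=h.device, dtype=h.dtype)
        aggregated.index_add_(0, dst, messages)         # sum messages per target node
        return torch.cat([h, aggregated], dim=-1)       # concatenate with own features

# Toy usage: 5 nodes with 16-dim features, 3 directed edges -> output of width 16 + 32.
layer = TransformSumCat(in_dim=16, hidden_dim=32)
out = layer(torch.randn(5, 16), torch.tensor([[0, 1, 2], [1, 2, 0]]))
```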
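The Experiment Setup row states that each transformation φ_i is a three-layer MLP whose layers apply a linear transformation, 1D batch normalization, ReLU, and dropout in sequence. The sketch below builds one such block in PyTorch; the dimensions, dropout rate, and the reading of k = 4 as the number of transformations are illustrative assumptions, since the actual hyperparameters are listed in the paper's Appendix E.

```python
import torch.nn as nn

def make_phi(in_dim: int, hidden_dim: int, out_dim: int, p_drop: float = 0.0) -> nn.Sequential:
    """Three-layer MLP where each layer is Linear -> BatchNorm1d -> ReLU -> Dropout,
    following the order quoted in the Experiment Setup row. Sizes are placeholders."""
    dims = [in_dim, hidden_dim, hidden_dim, out_dim]
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out),
                   nn.ReLU(), nn.Dropout(p_drop)]
    return nn.Sequential(*layers)

# The paper sets k = 4; under our reading this means four transformations phi_1..phi_4,
# instantiated here with placeholder widths and dropout.
phis = nn.ModuleList([make_phi(64, 64, 64, p_drop=0.1) for _ in range(4)])
```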