Relational Pooling for Graph Representations

Authors: Ryan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, Bruno Ribeiro

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate improved performance of RP-based graph representations over state-of-the-art methods on a number of tasks. In our experiments, we classify graphs that cannot be distinguished by a state-of-the-art WL-GNN (Xu et al., 2019). We demonstrate empirically that these still lead to strong performance and can be used with RP-GNN to speed up graph classification when compared to traditional WL-GNNs. (See the relational-pooling sketch below the table.)
Researcher Affiliation | Academia | Department of Statistics and Department of Computer Science, Purdue University, West Lafayette, Indiana, USA. Correspondence to: Ryan L. Murphy <murph213@purdue.edu>.
Pseudocode | No | The paper describes algorithms and methods using mathematical notation and textual descriptions, but it does not provide any formal pseudocode blocks or sections explicitly labeled "Algorithm".
Open Source Code | Yes | Our code is on GitHub: https://github.com/PurdueMINDS/RelationalPooling
Open Datasets | Yes | We chose datasets from the MoleculeNet project (Wu et al., 2018), which collects chemical datasets and collates the performance of various models that yield classification tasks and on which graph-based methods achieved superior performance. In particular, we chose MUV (Rohrer & Baumann, 2009), HIV, and Tox21 (Mayr et al., 2016; Huang et al., 2016).
Dataset Splits | Yes | We evaluate GIN and RP-GIN with five-fold cross-validation with balanced classes on both training and validation on this task. ... We train over 20 random data splits. ... evaluate using five random train/val/test splits. (See the data-loading and split sketch below the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., CPU, GPU models, or memory specifications).
Software Dependencies | No | The paper mentions using "DeepChem" but does not specify a version number or list any other software dependencies with their versions, which is necessary for reproducibility.
Experiment Setup | No | Further implementation and training details are in the Supplementary Material. Model architectures, hyperparameters, and training procedures are detailed in the Supplementary Material.
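
The Research Type row above centers on Relational Pooling (RP): an arbitrary, possibly permutation-sensitive function of a graph is made permutation-invariant by averaging it over node orderings, approximated in practice by sampling orderings, and RP-GNN additionally appends ordering-dependent one-hot node IDs to the features before a GNN such as GIN. The following is a minimal sketch of that permutation-sampling idea, not the authors' implementation; the names rp_estimate and with_onehot_ids, the NumPy-only setting, and the choice of 16 sampled orderings are assumptions made here for illustration.

    import numpy as np

    def rp_estimate(f, A, X, num_perms=16, rng=None):
        """Approximate a Relational Pooling representation by averaging an
        arbitrary (possibly permutation-sensitive) function f over sampled
        node orderings.  A: (n, n) adjacency matrix, X: (n, d) node features.
        Exact RP averages over all n! orderings; sampling gives an estimate."""
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[0]
        outputs = []
        for _ in range(num_perms):
            perm = rng.permutation(n)
            A_pi = A[np.ix_(perm, perm)]   # permute rows and columns together
            X_pi = X[perm]                 # reorder node features consistently
            outputs.append(f(A_pi, X_pi))
        return np.mean(outputs, axis=0)    # averaging gives approximate permutation invariance

    def with_onehot_ids(gnn):
        """Illustrative RP-GNN-style wrapper: append one-hot node IDs, which
        depend on the sampled ordering, to the features before calling gnn."""
        def f(A_pi, X_pi):
            ids = np.eye(X_pi.shape[0])    # ordering-dependent one-hot identifiers
            return gnn(A_pi, np.concatenate([X_pi, ids], axis=1))
        return f

Composing the two, rp_estimate(with_onehot_ids(my_readout), A, X) approximates an RP-GNN-style representation for any user-supplied graph readout my_readout; the authors' actual models are the GIN-based architectures in their repository.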
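
The Open Datasets and Dataset Splits rows refer to MoleculeNet tasks (MUV, HIV, Tox21) accessed through DeepChem and to five-fold cross-validation with balanced classes. The sketch below shows one way to load such a dataset and form stratified folds; because the paper pins no DeepChem version (see the Software Dependencies row), the exact loader keywords vary across releases, and the use of scikit-learn's StratifiedKFold is an assumption here, not the authors' own split code.

    import numpy as np
    import deepchem as dc                       # version not pinned by the paper
    from sklearn.model_selection import StratifiedKFold

    # Load Tox21 from MoleculeNet; MUV and HIV have analogous loaders
    # (dc.molnet.load_muv, dc.molnet.load_hiv).  Defaults are used because
    # featurizer/splitter keyword names differ between DeepChem releases.
    tasks, (train, valid, test), transformers = dc.molnet.load_tox21()

    # Treat the first task's labels as a binary target for illustration.
    y = train.y[:, 0].astype(int)

    # Stratified five-fold CV keeps the class ratio similar across training
    # and validation folds, one reading of "balanced classes" in the splits.
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros((len(y), 1)), y)):
        print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val")

For graph-based models, a graph featurizer would typically be passed to the loader (for example featurizer='GraphConv' in many DeepChem releases), but the exact argument names depend on the installed version.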