Memory-Based Graph Networks

Authors: Amir Hosein Khasahmadi, Kaveh Hassani, Parsa Moradi, Leo Lee, Quaid Morris

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that the proposed models achieve state-of-the-art results in eight out of nine graph classification and regression benchmarks. We also show that the learned representations could correspond to chemical features in the molecule data. Code and reference implementations are released at: https://github.com/amirkhas/GraphMemoryNet
Researcher Affiliation | Collaboration | Amir Hosein Khasahmadi (1,2), Kaveh Hassani (3), Parsa Moradi (4), Leo Lee (1,2), Quaid Morris (1,2). (1) University of Toronto, Toronto, Canada; (2) Vector Institute, Toronto, Canada; (3) Autodesk AI Lab, Toronto, Canada; (4) Sharif University of Technology, Tehran, Iran
Pseudocode | No | The paper describes algorithms and formulations with mathematical equations, but it does not present any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Code and reference implementations are released at: https://github.com/amirkhas/GraphMemoryNet
Open Datasets | Yes | We use nine benchmarks including seven graph classification and two graph regression datasets to evaluate the proposed method. These datasets are commonly used in both graph kernel (Borgwardt & Kriegel, 2005; Yanardag & Vishwanathan, 2015; Shervashidze et al., 2009; Ying et al., 2018; Shervashidze et al., 2011; Kriege et al., 2016) and GNN (Cangea et al., 2018; Ying et al., 2018; Lee et al., 2019; Gao & Ji, 2019) literature. The summary of the datasets is as follows, where the first two benchmarks are regression tasks and the rest are classification tasks: ESOL (Delaney, 2004) ... Lipophilicity (Gaulton et al., 2016) ... Bace (Subramanian et al., 2016) ... DD (Dobson & Doig, 2003) ... Enzymes (Schomburg et al., 2004) ... Proteins (Dobson & Doig, 2003) ... Collab (Yanardag & Vishwanathan, 2015) ... Reddit-Binary (Yanardag & Vishwanathan, 2015) ... Tox21 (Challenge, 2014).
Dataset Splits | Yes | To evaluate the performance of our models on the DD, Enzymes, Proteins, Collab, and Reddit-Binary datasets, we follow the experimental protocol in (Ying et al., 2018) and perform 10-fold cross-validation and report the mean accuracy over all folds. Table 1: Mean validation accuracy over 10 folds. (A sketch of this protocol appears after the table.)
Hardware Specification | No | The paper describes software and training parameters but makes no mention of the specific hardware (GPU model, CPU type, memory, etc.) used for the experiments.
Software Dependencies | No | We implemented the model with PyTorch (Paszke et al., 2017) and optimized it using the Adam (Kingma & Ba, 2014) optimizer. While PyTorch and Adam are mentioned, specific version numbers for PyTorch or other libraries are not provided; a citation year is not a version number for the software. (See the training-loop sketch after the table.)
Experiment Setup | Yes | We trained the model for a maximum of 2000 epochs and decayed the learning rate by 0.5 every 500 epochs. The model uses batch normalization (Ioffe & Szegedy, 2015), skip connections, Leaky ReLU activation functions, and dropout (Srivastava et al., 2014) for regularization. We also set the temperature in the Student's t-distribution to 1.0 and the restart probability in RWR to 0.1. The best-performing hyper-parameters for the datasets are shown in Table 4, which lists specific values for #Keys, #Heads, #Layers, Hidden Dimension, and Batch Size for each dataset. (Sketches of the learning-rate schedule, the Student's t-distribution assignment, and RWR appear after the table.)
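
The Dataset Splits row quotes a 10-fold cross-validation protocol with the mean accuracy reported over all folds. Below is a minimal sketch of that protocol, assuming an sklearn-style stratified split; `graphs`, `labels`, and `train_and_evaluate` are hypothetical placeholders, not names from the paper or its released code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(graphs, labels, train_and_evaluate, n_folds=10, seed=0):
    """Run 10-fold CV and return the mean validation accuracy over folds."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, val_idx in skf.split(np.zeros((len(labels), 1)), labels):
        # Train on nine folds, evaluate on the held-out fold.
        accuracies.append(train_and_evaluate(graphs, labels, train_idx, val_idx))
    # The paper reports the mean validation accuracy over all folds (its Table 1).
    return float(np.mean(accuracies))
```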
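The Software Dependencies and Experiment Setup rows quote PyTorch with the Adam optimizer, a 2000-epoch budget, and a learning rate halved every 500 epochs. A minimal training-loop sketch under those quoted settings follows; the initial learning rate, `model`, `loader`, and `loss_fn` are placeholders (the paper's per-dataset hyper-parameters live in its Table 4).

```python
import torch

def train(model, loader, loss_fn, epochs=2000, lr=1e-3):
    # Adam optimizer with the quoted step decay: halve the learning rate
    # every 500 epochs. The initial lr here is an assumed placeholder.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.5)
    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()  # decay once per epoch boundary
```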
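The setup quote fixes the Student's t-distribution temperature at 1.0. Assuming the standard DEC-style form of that kernel for soft-assigning node queries to memory keys (the paper's released code may normalize differently), a sketch:

```python
import torch

def student_t_assignment(queries, keys, tau=1.0):
    # Soft assignment of node queries (n, d) to memory keys (k, d) with a
    # Student's t-distribution kernel; tau = 1.0 matches the quoted setup.
    dist_sq = torch.cdist(queries, keys).pow(2)              # (n, k) squared L2 distances
    scores = (1.0 + dist_sq / tau).pow(-(tau + 1.0) / 2.0)   # heavier tails than a Gaussian
    return scores / scores.sum(dim=1, keepdim=True)          # each row sums to 1
```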
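The setup quote also fixes the RWR restart probability at 0.1. The sketch below uses the standard closed form of random walk with restart; it illustrates the general technique under that setting, not necessarily the paper's exact implementation.

```python
import torch

def rwr(adj, restart_prob=0.1):
    # Random walk with restart in closed form: S = c * (I - (1 - c) * P)^(-1),
    # where P is the row-normalized adjacency matrix (adj is a dense float
    # tensor here) and c = restart_prob (0.1 in the quoted setup).
    # Row i of S scores every node's relevance to seed node i.
    n = adj.size(0)
    p = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-12)
    return restart_prob * torch.linalg.inv(torch.eye(n) - (1.0 - restart_prob) * p)
```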